Dear Business Partner,
Please send me suitable resumes of your consultants for the following position, along with their contact details and C2C rates.
Please add vishal@swarkysolutions.com to your hotlist distribution list.
Data Developer / Integration Engineer on Hadoop
San Jose, California
Our San Jose client seeks an Integration Engineer.
The role is for a Data Management Developer / Integration Engineer with experience and strong working knowledge of the Hadoop ecosystem and ETL.
We have data ingestion software that onboards structured and unstructured data from multiple data sources, including Oracle, Teradata, MySQL, and SQL Server. As we continue to grow and sign on new customers, our data management tools and integration challenges become increasingly important. As a Data Management Developer / Integration Engineer, you will work with internal teams and customers to onboard data from multiple source systems into our Hadoop data lake platform.
In this role you will work heavily with Java ingestion software for ETL, traditional databases such as Oracle and Teradata, and the Hive data store on the Hadoop platform. You will support day-to-day data load activity and troubleshoot data ingestion issues.
Additionally, the ideal candidate has a good understanding of Hadoop (MapR distribution preferred). Responsibilities related specifically to Hadoop platform support include onboarding new clients to our Hadoop platform and helping them get acquainted with the many tools in the Hadoop ecosystem (Hive, Hue, Drill, Spark, Impala, Kafka, etc.).
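To give a rough picture of the hands-on side of the role, below is a minimal, hypothetical Java sketch of a JDBC-based extract-and-load step from an Oracle source into a Hive staging table. The connection URLs, credentials, file paths, and table names are illustrative placeholders only, not the client's actual ingestion software, and the code assumes the Oracle and Hive JDBC drivers are on the classpath.

    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.sql.*;

    // Hedged sketch of a simple extract-and-load step: pull rows from an Oracle
    // source over JDBC, write them to a delimited file, then load the file into
    // a Hive staging table. All names below are hypothetical placeholders.
    public class OracleToHiveSketch {
        public static void main(String[] args) throws SQLException, IOException {
            String oracleUrl = "jdbc:oracle:thin:@//source-host:1521/ORCL"; // placeholder
            String hiveUrl   = "jdbc:hive2://hadoop-edge:10000/staging";    // placeholder
            String extract   = "/tmp/customers.tsv";                        // placeholder

            // 1) Extract from the source RDBMS to a tab-delimited file.
            try (Connection src = DriverManager.getConnection(oracleUrl, "etl_user", "secret");
                 Statement stmt = src.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT id, name, updated_at FROM customers");
                 BufferedWriter out = Files.newBufferedWriter(Paths.get(extract))) {
                while (rs.next()) {
                    out.write(rs.getLong("id") + "\t"
                            + rs.getString("name") + "\t"
                            + rs.getTimestamp("updated_at"));
                    out.newLine();
                }
            }

            // 2) Load into Hive. Assumes the extract has already been copied to HDFS,
            //    e.g. with `hdfs dfs -put /tmp/customers.tsv /landing/customers.tsv`.
            try (Connection hive = DriverManager.getConnection(hiveUrl, "etl_user", "");
                 Statement hstmt = hive.createStatement()) {
                hstmt.execute("LOAD DATA INPATH '/landing/customers.tsv' INTO TABLE customers_stg");
            }
        }
    }

A production pipeline adds scheduling, schema handling, and error handling around steps like this, which is where the day-to-day load support and troubleshooting responsibilities above come in.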
What you need for this position:
- Experience with big-data technologies such as Hadoop/Hive, MongoDB, or other NoSQL data stores.
- Strong experience with traditional RDBMSs such as Oracle and Teradata.
- 3+ years of Java ETL / data integration experience.
- 2+ years of experience with the Hadoop ecosystem - Cloudera, Hortonworks, or MapR (preferred).
- Comfortable working in a Linux environment.
- Experience with scripting - shell scripting, Python, etc.
- Profile and analyze large amounts of source system data, including structured and semi-structured data.
- Work with data originators to analyze the gaps in the data collected.
- Expert-level SQL coding/querying skills are a must.
- Conduct ETL performance tuning, troubleshooting & support.
- Must be comfortable working in a fast-paced, flexible environment, and take the initiative to learn new tools quickly.
- Strong understanding of Unix operating systems and Unix security paradigms.
- Excellent communication skills and experience with first-level support processes.
--
Thanks & Regards,
Vishal,
Swarky Solutions Corporation | Plymouth, MN 55446
Email: vishal@swarkysolutions.com
Web: http://www.swarkysolutions.com