
Hadoop Engineer

Position Description:
- Perform data modeling in RDBMS (transactional, data warehousing) on MSSQL, Oracle, and Sybase ASE, as well as Hadoop data modeling and design.
- Create and manage frameworks for data management, processing, and analytics.
- Develop and manage applications and frameworks within the Hadoop ecosystem (YARN, HDFS, Spark, Hadoop, Impala, Kafka, Kudu, etc.).
- Design HDFS file system layouts, choosing techniques and formats for optimal performance.
- Develop Spark applications in Scala or Java to support analytics, predictive modeling, and ETL/ELT (see the sketch after this list).
- Develop, deploy, and monitor application and server performance for developed applications (YARN, MapReduce, Spark).
- Performance-tune MapReduce, Spark, and other applications within Hadoop.
- Manage data analytics and data visualization with tools such as Tableau, Power BI, and Qlik.
- Design and develop data streaming applications using technologies such as Spark, Kafka, and Flume.
- Apply machine learning and AI skills in Scala and Python on Spark.
- Engage in statistical analysis, machine learning, and data mining using Tableau, Hadoop, R, BigQuery, and TensorFlow.
- Deliver solutions using the Hadoop ecosystem.
- Work in data analytics, data warehousing, or similar areas.
- Apply enterprise data modeling concepts, ETL, and advanced SQL programming, as well as Scala/Java in Spark using DataFrames and RDDs.
- Design, deploy, and monitor Spark applications.
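To illustrate the Spark development duties above, here is a minimal sketch of a batch ETL job in Scala using the DataFrame API. The HDFS paths, column names, and aggregation logic are hypothetical and not part of this posting:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Hypothetical batch job: read raw events from HDFS, roll them up by day
// and event type with the DataFrame API, and write Parquet partitioned by date.
object DailyEventRollup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-event-rollup")
      .getOrCreate()

    // Read raw JSON events from HDFS (placeholder path).
    val events = spark.read.json("hdfs:///data/raw/events")

    // Aggregate event counts per day and event type.
    val rollup = events
      .withColumn("event_date", to_date(col("event_ts")))
      .groupBy(col("event_date"), col("event_type"))
      .agg(count("*").as("event_count"))

    // A date-partitioned Parquet layout is one common technique for the
    // kind of performance-oriented HDFS design the posting describes.
    rollup.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("hdfs:///data/curated/event_rollup")

    spark.stop()
  }
}
```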
Position Qualifications: Applicant must possess a Bachelor's or foreign equivalent degree in Computer Science, Statistics, Applied Math, or a related field, plus 1 year of professional experience developing and supporting Hadoop ecosystem projects with a strong emphasis on data architectures leveraging Spark, Hive, and HDFS as part of an enterprise-level data analytics team, or in a related position. Additionally, the applicant must have 1 year of professional experience in each of the following:
1.) Enterprise Data Modeling: relational database experience (e.g., Oracle, SQL Server, Sybase DBMS);
2.) HDFS Design: HDFS file system design using techniques, formats, and layouts for optimal performance;
3.) Hadoop Ecosystem Components: advanced understanding of proper utilization, management, and design of Hadoop applications for data management, processing, and analytics within the Hadoop ecosystem (HDFS, Spark, Hadoop, Impala, Kafka, etc.);
4.) Hadoop Batch Application Development: developing and managing Spark-framework-based applications within the Hadoop ecosystem using Scala or Java;
5.) Hadoop Streaming Application Development: designing and developing data streaming applications using Spark, Kafka, and Flume (a brief sketch follows this list);
6.) Hadoop Application Performance: deploying and monitoring applications and/or server performance, with a focus on performance monitoring and tuning of MapReduce, Spark, and other applications within Hadoop;
7.) Data Visualization: experience with data analytics and data visualization tools similar to Tableau, Power BI, and Qlik;
8.) Data Analytics: work in data analytics, data warehousing, or similar; and
9.) Query Languages: advanced SQL programming and Spark SQL using DataFrames.
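For the streaming qualification (item 5), here is a minimal Spark Structured Streaming sketch in Scala, assuming a Kafka source (this requires the spark-sql-kafka connector on the classpath); the broker address, topic name, and HDFS paths are placeholder assumptions:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical streaming job: consume a Kafka topic and land the records
// to HDFS as Parquet, with checkpointing for fault tolerance.
object EventStreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-stream-ingest")
      .getOrCreate()

    // Subscribe to a Kafka topic; records arrive as key/value binary columns.
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
      .option("subscribe", "events")                    // placeholder topic
      .load()

    // Decode the value payload to a string for downstream parsing.
    val decoded = stream.selectExpr("CAST(value AS STRING) AS payload")

    // Write the stream to HDFS; the checkpoint location lets the query
    // resume where it left off after a restart.
    val query = decoded.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/streaming/events")
      .option("checkpointLocation", "hdfs:///checkpoints/events")
      .start()

    query.awaitTermination()
  }
}
```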
Position Location: Sussex, WI
Compensation: $59,488 - $89,174.52 / year
Desired Majors: Computer Science, Statistics, Applied Math, or a related field