Job Description

We are looking for a Big Data Engineer who will work on collecting, storing, processing, and analyzing large data sets.
The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them.
You will also be responsible for integrating them with the architecture used across the company.
Skills and Qualifications
• Proficient understanding of distributed computing principles
• Production coding experience in Java, Python, or Scala required
• Experience managing a Hadoop cluster (Cloudera preferred), including all bundled services
• Ability to solve any ongoing issues with operating the cluster
• Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
• Experience building stream-processing systems using solutions such as Storm or Spark Streaming
• Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
• Experience with Spark
• Experience integrating data from multiple sources, such as Microsoft SQL Server and Oracle
• Good understanding of SQL queries, joins, stored procedures, relational schemas
• Experience with NoSQL databases, such as HBase, Cassandra, or MongoDB (preferred)
• Knowledge of various ETL techniques and frameworks, such as Flume
• Experience with various messaging systems, such as Kafka or RabbitMQ
• Cloudera experience and FS domain knowledge are a big plus but not required
Salary: 0 to 0
Years of Experience: 5+ to 10 years
Minimum Education: -
Willingness to Travel: -
Hours per week: 0