Big Data Ops Engineer

What you get to work on:

As a Big Data Ops Engineer, you will design and maintain a stable, scalable, and effective data infrastructure, and provide tools that enable Data Scientists and Engineers to perform large-scale data mining and information retrieval.

  • Design and create a stable, scalable, and effective data architecture
  • Create, manage, and support Hadoop clusters
  • Support data science and business intelligence
  • Support search, recommendation, personalization, and discovery engines
  • Support future data-driven products
  • Develop custom tools and automate processes
  • Constantly explore the state of the art to evolve and improve the data platform/architecture

Our technologies/languages:

Hadoop, AWS, Spark, MapReduce, SQL, ZooKeeper, HBase, Flume, Kafka, Storm, Elasticsearch, Docker, Chef, Bash, Python, Java, Scala, Akka

The qualifications:

  • Bachelor’s degree in Computer Science or a related field from an accredited four-year university
  • Experience building large-scale distributed systems for storing, processing, and serving data for a highly available website (ideal)
  • Experience setting up and maintaining a Hadoop cluster
  • Proficiency in Bash, Python, Java, and other languages commonly used in systems engineering
  • Knowledge of AWS and Linux
  • Knowledge of Spark, HBase, Flume, Kafka, MapReduce, and MySQL
