Long business trip to the USA
- Experience with the Hadoop stack (Hive, Pig, Hadoop Streaming) and MapReduce
- Experience with HBase or a comparable NoSQL database
- Strong grasp of algorithms and data structures
- Database experience with MySQL, MS SQL Server, or equivalent
- Proficiency in at least two of the following languages: Java, Python/Jython, Scala, Clojure, Ruby, C++
- Experience with test-driven development and with SCM and CI tools such as Git, SVN, Jenkins, and Gerrit code review
- Good familiarity with Linux/Unix, including scripting and administration
- Experience with Spark MLlib, Mahout, Elasticsearch, TSDB
- Familiarity with data formats and serialization: XML, JSON, Avro, Thrift, Protocol Buffers
- Experience with graph frameworks, such as Giraph, Hama, GraphLab, GraphX
- Experience with R and/or MATLAB
- Strong communication skills
- Have read Tom White's "Hadoop: The Definitive Guide" and Jimmy Lin and Chris Dyer's "Data-Intensive Text Processing with MapReduce"
- Build and maintain code to populate HDFS with log events from Kafka or with data loaded from SQL production systems
- Design, build, and support data transformation, conversion, and validation pipelines
- Design and support effective storage and retrieval of Big Data (>500 TB)
- Assess the impact of external production-system changes on Big Data systems running on Hadoop or Spark, and implement ETL changes to ensure consistent and accurate data flows
- Design and implement best practices for cloud-based cluster deployments of Hadoop, Spark, and other Big Data ecosystem tools
- Unique working environment where you communicate and work directly with the client
- Use of the latest technologies and tools
- State-of-the-art, centrally located offices with a warm atmosphere that makes for really good working conditions
- High salary