Working at Tiger Analytics, you’ll be at the heart of this AI revolution. You’ll work with teams that push the boundaries of what is possible and build solutions that energize and inspire. We are looking for a data engineer to be based out of our Singapore office. As a Big Data Engineer, you will:
- Develop big data solutions for near real-time stream processing, as well as batch processing on the Big Data platform.
- Analyse problems and engineer highly flexible solutions.
- Set up and run Big Data development frameworks such as Hive, Sqoop, Pig, Mahout, Scala, Spark, streaming mechanisms, and others.
- Apply working experience with Big Data services on the cloud, preferably Azure (ADF, ADLS, Blob Storage, Azure SQL WH, etc.).
- Work with business domain experts, data scientists, and application developers to identify data relevant for analysis and develop the Big Data solution.
- Coordinate effectively with project team members, customers, and business partners.
- Adapt to and learn new technologies in the Big Data ecosystem.
- Take the initiative to run projects and gel with the start-up environment.
Required Experience, Skills & Competencies
- Minimum 5 years of professional experience, including 2 years of Hadoop project experience.
- Experience with Big Data technologies such as HDFS, Hadoop, Hive, Pig, Sqoop, Flume, Spark, etc.
- Experience working in a cloud environment, preferably Azure.
- Core or advanced Java experience (must-have).
- Experience developing and managing scalable Hadoop cluster environments and other scalable, supportable infrastructure.
- Familiarity with data warehousing concepts, distributed systems, data pipelines, and ETL.
- Good communication (written and oral) and interpersonal skills.
- Extremely analytical with strong business sense.
- Experience with NoSQL technologies such as HBase, Cassandra, and MongoDB (good to have).