We are seeking an enthusiastic and inquisitive team member to join our growing organization. In this position you will have many opportunities to learn about location data, get your hands dirty working with big data technologies, and create robust data pipelines and big data engineering solutions. You will also get the chance to perform analysis on location data. The more curious you are, the more you will learn with us.
You’ll be working closely with all facets of the team and learning about the overall data economy.
Requirements and Qualifications
- 2-5 years of experience in data engineering, analytics, or data science
- Strong programming skills in Java or Python and excellent SQL skills
- Experience with relational (SQL) and NoSQL databases
- Expertise in AWS services: EMR, RDS, Athena, S3, Data Pipeline, Redshift
- Experience with big data tools: Hadoop, Spark, Kafka
- Experience with stream-processing systems such as Spark Streaming or Storm
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with shell scripting
- Ability to quickly understand and appreciate underlying business context, problems and objectives of analytical projects
- Clear communication skills to run well-defined analyses and produce reports
- Excellent time management skills
Duties and Responsibilities
- Create and maintain optimal data pipeline architecture
- Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability
- Build analytics tools that utilise the data pipeline to provide actionable insights into customer-delivered data, operational efficiency, and other key business performance metrics
- Optimise and improve our existing data products.
- Work directly with, and learn from, an experienced team
- Learn the Big Data stack, data monetisation, and the business model of a data marketplace
- Enjoy flexible working hours and the option to work from home