
Job Summary


Job Type
Permanent

Seniority
Senior

Years of Experience
3-7 years

Tech Stacks
ETL
Amazon S3
REST API
Docker
Oracle
API
Sisense
Superset
Shell Script
Analytics
Spark Streaming
Qlik
Flow
EMR
CI
API Gateway
Athena
Redshift
ELK
PySpark
Snowflake
HBase
Segment
Shell
Apache
Hive
Spark
NoSQL
Airflow
Kubernetes
Kafka
SQL
PostgreSQL
Scala
Redis
Cassandra
Hadoop
MySQL
Python
AWS
Java

Job Description


We are constantly looking for the best technology talent out there to build our next-gen products and to contribute to our proprietary, patent-pending Lightspeed and RemitSend platforms.

Responsibilities:

  • Design, build, and deploy scalable, robust data pipelines and solutions that ensure data governance by implementing data validation, quality checks, classification, and central data management systems.
  • Work with internal systems to extract, transform, and load (ETL) data from disparate data sources in a microservices architecture (batch jobs, near-real-time reporting, real-time dashboard analytics).
  • Assist internal stakeholders in streamlining regulatory requirements, automate the reporting framework, and act as the central point of contact for all technical/functional resolutions required in reporting requests from regulatory bodies.
  • Automate the existing reporting flow by building a central reporting framework that can cater to the entire range of internal and external stakeholders.
  • Build, restructure, and revamp the data warehouse into a highly scalable, well-integrated, and reliable data ecosystem that supports full-fledged decision-making, reveals actionable insights through business intelligence, acts as a platform for predictive analytics, and caters to internal management executives and external customers.
  • Engage with internal and external stakeholders for requirements gathering and provide them timely inputs on matters pertaining to technical and data system issues.
  • Propose new ideas to implement best practices in data and identify opportunities to leverage data for innovative solutions.
  • Provide end-to-end solutions on the data platform and propose/implement data product ideas. Take ownership of data deliverables and work on the entire SDLC, from requirements gathering through development, unit testing, and deployment.

Requirements:

  • Exposure to at least one scripting language out of Python, Spark, Scala, or Java.
  • Complex SQL writing and hands-on RDBMS experience are mandatory.
  • Should have working knowledge of workflow scheduling and ETL pipelines.
  • Experience writing scripts in Python and extensive knowledge of Python libraries.
  • Comfortable writing advanced and complex SQL queries.
  • Extensive understanding of relational databases like MySQL, PostgreSQL, Oracle.
  • Working knowledge of Apache Airflow.
  • Should have worked on columnar databases like Amazon Redshift, Snowflake, or HBase.
  • Foundational experience in setting up data lakes and data warehouses.
  • Knowledge of AWS services like S3, Lambda, API Gateway, Athena, Redshift, etc.
  • Experience building real-time pipelines using technologies like Kafka and Spark Streaming.
  • Hands-on experience in creating batch data pipelines using Spark/PySpark, Python, Scala or similar data technologies.
  • Strong ETL fundamentals and expertise in Data Consistency and Data Governance.
  • Experience in any BI tool: Tableau, Zoho Analytics, Qlik, Superset, SiSense etc.
  • Familiar with CI/CD fundamentals and SDLC variants.
  • Working knowledge of ML models such as Random Forest, Decision Tree, clustering, and regression models.

Good to have:

  • Fintech experience (regulatory and compliance experience).
  • Basic understanding of Docker/Kubernetes containerization and orchestration.
  • Shell scripting.
  • Knowledge of DevOps and cloud engineering.
  • Experience in deploying data lake solutions using Hadoop, Hive, Spark etc.
  • Amazon EMR and heavy load compute.
  • Knowledge of NoSQL Data stores like Mongo, Cassandra, Redis etc.
  • ELK stack or Graylog.
  • REST API and Web Semantics.

Culture at MatchMove:

We at MatchMove are all singularly focused on building the next generation of embedded Banking-as-a-Service, for all types of organizations, from every segment of the economy. Our performance-based culture is inclusive, results-driven and customer-centric, with diverse talent from across the globe coming together as a team every day to create new user experiences and innovations for our customers.
