
Job Summary


Job Type
Permanent

Seniority
Junior

Years of Experience
Information not provided

Tech Stacks
ETL
Shell
Hive
Spark
Flink
SQL
Hadoop
Python
Java

Job Description


Job Responsibilities:
1. Responsible for analyzing data requirements and providing data technology solutions for business departments.
2. Responsible for data integration between business systems and the big data platform, and for building the data warehouse.
3. Responsible for business data ETL development, and for offline and real-time data development on the current big data platform.
4. Responsible for big data application monitoring, diagnosis, and maintenance, and for resolving data issues.

Job Requirements:
1. BS, MS, or PhD in Computer Science or another related technical discipline.
2. No prior working experience is required.
3. Familiar with common ETL tools; proficient in SQL writing and optimization; familiar with at least one language such as Java, Python, or shell.
4. Familiar with essential big data frameworks, including the Hadoop ecosystem, Spark, and Flink; proficient in Hive, especially HQL programming and optimization.
5. Familiar with data model design for data warehouses; master of OLAP dimensional modeling design methods.
6. Strong sense of responsibility, good communication and coordination skills, and team spirit.
7. Experience in the Internet, e-commerce, or data analysis industries is preferred.
8. Fluency in Chinese is preferred, to liaise with business partners in China.
