We are looking for a Database and Big Data Engineer who can find innovative and creative solutions to tough problems. As a Big Data Engineer, you'll create and manage our data infrastructure and tools. You'll evaluate the optimal design and architecture for different use cases of data streaming, storage, and processing, and drive the implementation. This role will also be responsible for integrating the data platform with the various application frameworks and platforms within the organization's Industry 4.0 digital solution ecosystem, as well as supporting the exploration of new technologies and innovative solutions from a data communication, storage, and retrieval perspective.
• Work with application development teams, business unit process experts, and outsourced technology partners to design data streaming, storage, retrieval, and analysis solutions
• Evaluate and implement the Big Data tools, frameworks, software, and hardware required to provide relevant data platform capabilities
• Develop data integration solutions for IT and OT technologies developed by the Innovation Hub
• Evaluate and implement data ETL tools, and establish development, test, and deployment processes and governance
• Review solution performance, fine-tune it, and advise on necessary infrastructure configuration updates or upgrades
• Ensure that data management, access control, and usage comply with the organization's data governance policy
• Design and implement data models and the integration development process
• Support and provide technical advice for the exploration of new technologies, POC development of new IT and OT solutions, data analytics, and AI/ML development
Minimum Education required (specific field or equivalent):
• Bachelor's degree in a technical discipline, with an emphasis on Computer Science, Computer Engineering, or Statistics
• A passion for large-scale data, data protocols, and data analytics
Minimum experience required in role:
• Proficient understanding of distributed computing principles
• 2–5 years of experience with database platforms (e.g., relational databases, Hadoop clusters, HBase, Cassandra, MongoDB, data streams, APIs)
• 2–5 years of experience building stream-processing systems using solutions such as Storm or Spark Streaming
• Experience with Big Data querying tools, such as Pig, Hive, and Impala
• Experience with Spark
• Experience with messaging systems, such as Kafka or RabbitMQ
• Experience with Big Data ML toolkits such as Mahout, Spark MLlib, or H2O
• Experience with cloud Big Data platforms
• Good understanding of secure software development, secure code quality control, and application and system integration vulnerability assessment
• Good understanding of application development and software assurance in a highly regulated industry
• Good understanding of smart factory analytics solutions such as the PTC smart factory framework (ThingWorx, Kepware)
• Good understanding of the Lambda Architecture, along with its advantages and drawbacks
• A strong individual performer as well as a contributor in a team
• Highly self-motivated fresh graduates are welcome to apply as well
Preferred Skills (if any):
• Demonstrated excellent analytical, communication, and interpersonal skills, required to build relationships with team members, solve problems, and resolve issues
• Experience with Big Data development tools
• Familiarity with software development
• Familiarity with deep learning and computer vision domains (object classification/detection/segmentation, video analytics, text detection/OCR, etc.)
• Familiarity with deep learning frameworks (e.g., TensorFlow, PyTorch, Keras)
• Familiarity with smart factory analytics solutions such as the PTC smart factory framework (ThingWorx, Kepware)