The FICC Platform Engineering team helps drive the division's platform strategy by partnering with both core engineering and engineers in the front office (by platform, we mean compute, runtime, networks, databases, etc.). It achieves this at scale by leveraging data, APIs and automation to provide oversight of the thousands of distributed components that allow the FICC franchise to run its business and serve its clients. We do this by harvesting various heterogeneous platform APIs across the firm into a consolidated data platform. This platform contains hundreds of unique data sets that provide transparency on many of the key parts of our distributed runtime. It allows engineers within the division to see their strategic platform clearly, while also providing transparency on the areas deemed legacy that incur both friction and cost for ongoing strategic endeavours.

The team consists of application developers who are strongly data-science oriented, with a passion for designing APIs and data sets that faithfully model various aspects of our distributed runtime. The team aspires to extend some of its data offerings to encompass machine learning, gaining greater value from our investment in data; this is a nascent area of innovation where we have already seen significant value yielded from such investment. Our investment in modelling platforms through data has allowed us to oversee and manage an environment spanning the entire globe, totalling hundreds of thousands of cores of compute across on-premise data centres, exchange colocations and public cloud service providers. This is a unique opportunity to work very closely with systems spanning the entire Fixed Income, Currencies and Commodities business, and to leverage automation to streamline management of this large distributed ecosystem.
RESPONSIBILITIES AND QUALIFICATIONS
Job Summary
- A data-first mindset and a passion for innovating through harnessing data
- Analyse data sets to identify anomalies and codify ways to extract meaningful signals and conclusions from large (and potentially noisy) data sets
- Work constructively and collaboratively with other engineers to design and architect new and innovative data pipelines that improve the way we operate and manage our distributed runtime
- Partner with software and infrastructure owners to solve software and hardware issues through the data we generate
- Drive platform projects and initiatives that span engineers throughout the division
- Help mould and forge platform-management blueprints for the division to adopt
- Manage work to balance the short-term needs of the business while placing significant emphasis on long-term strategic goals
Basic Qualifications
- Strong academic background in Computer Science or an analytical field such as Mathematics, Physics, Engineering, etc.
- 3 or more years of relevant work experience
- Strong knowledge of Python and comfortable working with large tabular and structured data sets
- Excellent analytical skills with strong SQL and working knowledge of RESTful APIs
- Solid communication and interpersonal skills
- Ability to multi-task and prioritise tasks effectively
- Ability to quickly understand new systems and technologies
- Working knowledge of the Linux operating system and comfortable working at the terminal
Desirable Qualifications
- Experience working with Elasticsearch and Kafka
- Comfortable working with the GitLab SDLC
- Worked with Python data-science libraries such as NumPy, SciPy or pandas
- Worked with container orchestration environments such as Kubernetes
- Worked with cloud-native solutions in AWS and/or GCP (e.g. Google BigQuery)
- Worked in a DevOps / SRE capacity