
Junior Data Quality Scientist (Remote) at the Coca-Cola Company


The Coca-Cola Company (NYSE: KO) is the world’s largest beverage company, refreshing consumers with more than 500 sparkling and still brands. At The Coca-Cola Company you can cultivate your career in a challenging and dynamic environment. We are the largest manufacturer and distributor of nonalcoholic drinks in the world, selling more than 1 billion drinks a day. Unlock your full potential with a future-focused company that is known and respected throughout the world.

We are recruiting to fill the position below:

Job Title: Junior Data Quality Scientist

Location: Remote
Category: Finance
Contract: Permanent
Team: Data, Insights & Analytics (DIA)

Role Purpose

  • As a DQ Scientist focused on Anomaly Detection (data errors) and Remediation within the Data, Insights & Analytics (DIA) team, you will be responsible for implementing and executing business-led data governance initiatives that safeguard the quality of the upstream part of the data value chain through intelligent solutions, with an end-to-end view and understanding.
  • You will need to “Know the data on both sides (business and technical)”: understand the business requirements for data quality and translate them into intelligent solutions; “Do the hands-on”: implement data quality detection and remediation through automation and intelligence; and “Live with innovation”: embrace the iterative nature of data projects with a strong innovation mindset, thinking “outside the box”, then trying, learning and improving.
  • This work requires collaboration with multiple stakeholders and data users, including data curators, data engineers, data scientists, functional teams, functional data owners/data stewards, and the other Data & Analytics leads within the DIA team.

Your New Key Responsibilities
In this critical role, you will work with our functional data owners and data stewards to ensure the quality of data for all of our data users. Your responsibilities comprise the following:
Analysis and Translation:

  • Understand the purpose and usage of the data in specific use cases with solid exploratory data analysis skills (data is not just numbers; it carries many meanings, so you should read and talk about data the way you read and talk about your favorite books)
  • Translate business requirements from idea generation to realization and implementation with strong know-how of both sides of the data, business and technical (data is like a language: business talks about data in a business way, tech talks about it in a technical way, and you must be bilingual)
  • Explore and try different innovative methods to make the DQ process more intelligent and automated (no one wants to live in a world full of surprises (data errors), yet they are avoidable: can you detect them at the time and place they happen, without deploying an army of analysts and engineers?)
  • Work with a multidisciplinary team (Data Scientists, Insights Experts, Data Engineers, Data Curators, incl. vendors) using strong problem-solving skills, e.g. active listening, research, creativity and communication (yes, as a big organization, our data landscape is enormous, with every kind of data role you can imagine; if you are thinking “go big”, this is your “home”)

Hands-on delivery:

  • Apply suitable data science techniques (unsupervised, semi-supervised, supervised learning and/or smart rule engines) to solve data problems and support ideation and early-stage PoCs (you don’t ask a friend to do grid search (hyperparameter tuning) for you, or to tell you which algorithm is worth a try)
  • Lead the innovation and improvement of our current data quality tools and solutions at scale (a successful PoC alone is not enough; can you take it big?)
  • Lead by example on data best practices and keep exploring innovation opportunities for improvement (no one lives in a perfect world, but do you dare to challenge yourself and others to make things better? And since we build solutions that check the quality of others’ work, the quality and intelligence of our own work must be “sky high”)
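The anomaly detection work described above often starts from simple statistical fences before moving to models like Isolation Forest. As a minimal, stdlib-only sketch (the sample data and the k = 1.5 fence are illustrative assumptions, not company specifics), Tukey’s IQR rule flags records that fall outside the interquartile fences:

```python
from statistics import quantiles

def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    q1, _, q3 = quantiles(values, n=4)  # quartile cut points
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]

# Hypothetical daily shipment volumes with one suspicious spike
volumes = [102, 98, 101, 99, 100, 97, 103, 100, 560]
print(iqr_outliers(volumes))  # → [560]
```

A model such as scikit-learn’s IsolationForest (listed in the requirements below) generalizes this idea to high-dimensional data, but the principle is the same: score each record against expected behavior and surface the exceptions automatically.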

Requirements
Are these your secret Ingredients?

  • Master’s Degree preferred, or a Degree with emphasis in Computer Science, Engineering, or Data / Data Science related subjects; a minimum of 1-2 years of Data Science experience is required
  • Proven track record of understanding business challenges and translating them into value-adding, technically sound end solutions
  • Strong working knowledge of and experience in data management
  • Data Science: unsupervised learning (especially Anomaly Detection / Outlier Detection), semi-supervised and supervised learning
  • Algorithms: Isolation Forest, clustering, dimensionality reduction, word vectors/embeddings, classification and regression
  • Data wrangling and engineering: ETL, e.g. Python, SQL, PySpark
  • Data platforms: SQL databases, data lakes, Jupyter notebooks/IDEs, Databricks
  • Feel free to share your own GitHub or other code repository link if possible
  • Understanding of Agile product delivery (the SAFe framework is preferred)
  • Previous experience working in the CPG sector and with its core datasets is a big plus.
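To make the “smart rule engines” requirement concrete, here is a minimal sketch of one in plain Python; every field name, rule and record below is hypothetical, invented purely for illustration:

```python
# Minimal rule-engine sketch: each rule is a (name, predicate) pair over a record dict.
RULES = [
    ("sku_present",   lambda r: bool(r.get("sku"))),
    ("qty_positive",  lambda r: isinstance(r.get("qty"), (int, float)) and r["qty"] > 0),
    ("country_valid", lambda r: r.get("country") in {"US", "MX", "NG", "DE"}),
]

def check(record):
    """Return the names of the rules this record violates."""
    return [name for name, pred in RULES if not pred(record)]

records = [
    {"sku": "KO-001", "qty": 24, "country": "US"},   # clean record
    {"sku": "",       "qty": -3, "country": "XX"},   # violates all three rules
]
for r in records:
    print(r.get("sku") or "<missing>", check(r))
```

In practice such rules would be driven by metadata rather than hard-coded, so that data stewards can add checks without touching code, but the pattern of declarative rules evaluated per record is the core idea.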

Application Closing Date
Not Specified.

Method of Application
Interested and qualified candidates should:
Click here to apply online

