Data Engineer

Updated on July 18, 2023
We are looking for a skilled data engineer who has expertise in developing and architecting data pipelines. The candidate will be responsible for designing, building, and maintaining efficient, scalable, and reliable data infrastructure.

Why KeepFlying®

KeepFlying® is an Aviation DSaaS (Data Science as a Service) platform that helps Airlines, Lessors, Financiers & OEMs simulate the revenue potential of their assets using financial and risk models. KeepFlying® bridges the gap between Technical & Engineering data and Finance & Risk data to help value assets and their expected revenue potential over their remaining useful lives.
• Fast-growing and well-funded start-up
• Standout Product – Be part of a first-of-its-kind solution in the Aviation Industry.
• Learning – You will learn from a group of proven leaders and innovators.
• Flexibility – Our engineers enjoy the utmost flexibility, as we believe in judging by output, not hours worked.
• Innovative mindset – Our ecosystem gives you ample opportunity to showcase enterprising & innovative ideas, knowledge, and skills that directly contribute to the success of our company.
We are backed by Marquee Investors and Industry Experts. Join us on our data-to-dollar journey with a kickass and fun team.

Job Description

If you have a passion for building and optimizing data pipelines and enjoy working with a team of skilled professionals, we encourage you to apply for this position.

The candidate should have a solid understanding of Databricks and its components in order to design, build, and optimize data pipelines on the platform. They should be able to leverage Databricks notebooks and clusters to develop ETL processes and perform data transformations, and should be familiar with Databricks SQL and Databricks Delta for querying and managing data.
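
For illustration, a typical Databricks notebook cell for this kind of ETL work might look like the sketch below (all paths, table names, and columns are hypothetical; in a Databricks notebook the "spark" session is provided automatically):

# Illustrative sketch only; paths, table names, and columns are hypothetical.
# In a Databricks notebook, "spark" (a SparkSession) is predefined.
from pyspark.sql import functions as F

# Read raw utilization data from cloud storage
raw = spark.read.json("/mnt/raw/flight_hours/")

# Deduplicate, derive a date column, and drop invalid rows
clean = (
    raw.dropDuplicates(["asset_id", "recorded_at"])
       .withColumn("recorded_date", F.to_date("recorded_at"))
       .filter(F.col("flight_hours") >= 0)
)

# Persist as a Delta table so it can be queried with Databricks SQL
clean.write.format("delta").mode("overwrite").saveAsTable("analytics.flight_hours_clean")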

In addition to the programming languages and big data processing frameworks listed under Requirements below, the candidate should also have experience working with Databricks APIs and SDKs to automate various aspects of Databricks workflows. This could include automating cluster provisioning, job scheduling, and workflow orchestration using tools like Python and Apache Airflow, as sketched below.
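
As one example of such automation (the cluster spec, notebook path, and connection ID below are placeholder assumptions, not our actual configuration), a Databricks notebook run can be scheduled from Apache Airflow using the Databricks provider's operator:

# Minimal Airflow DAG sketch; the cluster spec, notebook path, and the
# "databricks_default" connection ID are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="daily_asset_data_pipeline",
    start_date=datetime(2023, 7, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Provision an ephemeral cluster and run an ETL notebook on it
    run_etl = DatabricksSubmitRunOperator(
        task_id="run_databricks_etl",
        databricks_conn_id="databricks_default",
        new_cluster={
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "i3.xlarge",
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/Repos/etl/flight_hours_clean"},
    )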

Overall, the ideal candidate should be a well-rounded data engineer with expertise in developing and architecting data pipelines, as well as specific experience working with Databricks.

Responsibilities

• Design, build, and maintain data pipelines for various data sources and destinations
• Architect data pipelines for scalability, reliability, and performance
• Develop ETL processes to integrate data from multiple sources
• Implement data quality checks and monitoring to track data pipeline health
• Work with data analysts and data scientists to provide them with clean, reliable data for analysis
• Collaborate with other teams to integrate data across multiple systems
• Optimize and tune the performance of data pipelines
• Automate the deployment and management of data pipelines
• Work hands-on with Databricks notebooks, clusters, jobs, and workflows
• Apply Databricks data engineering best practices and optimization techniques
• Work with Databricks SQL, Databricks Delta, and MLflow
• Integrate Databricks with other data processing systems and tools

Requirements

• Bachelor's degree in Computer Science, Information Technology, or a related field
• Minimum of 3 years of experience as a data engineer
• Strong experience with programming languages like Python, Java, and Scala
• Experience with big data processing frameworks like Apache Spark, Hadoop, or Flink
• Strong knowledge of SQL and NoSQL databases
• Experience with cloud-based data processing services like AWS Glue, Azure Data Factory, or Google Cloud Dataflow
• Familiarity with data modeling and data warehousing concepts
• Experience with source control systems like Git
• Strong analytical and problem-solving skills
• Excellent communication and collaboration skills

Preferred Qualifications

• Master's degree in Computer Science, Information Technology, or a related field
• Experience with distributed streaming frameworks like Apache Kafka or Apache Storm
• Familiarity with containerization and container orchestration systems like Docker and Kubernetes
• Experience with data visualization tools like Tableau, Power BI, or QlikView

Recruitment Process

Apply now

Submit your resume and cover letter to careers@cbmmgroup.com
We never share your details with third parties.