    Software Development Engineer - Kafka at Amadeus Nice (Biot, France)
    Employer: Amadeus Nice
    Location: Biot, France
    Job Type: Full Time

    Job Description:

    Careers Site: Software Development Engineer - Kafka (78388)

    If you could change one thing about travel, what would it be? At Amadeus, you can make that happen!


    Travel makes the world a better place and we are fully dedicated to improving it and making it even more rewarding.  We are one of the world’s top 15 software companies: we provide technology solutions and services within the travel industry.


    Do you have ideas on how to improve travel for everyone?  Do you find the idea of working in a diverse, multicultural environment exciting? Are you ready to make an impact across the world? Great, then join us! Let’s shape the future of travel together. #[email protected]



    Business environment



    The emergence of Big Data technologies is an opportunity to improve our existing products and to create a brand-new generation of data-driven applications. The company has recognized this through the Travel 360 program, which identified dozens of applications across the organisation that will adopt these technologies.



    The BIP (Business Intelligence Platform) department plays a key role inside the company, providing reliable, secure and efficient data storage and data processing solutions. The group is responsible for:



    • Big Data processing and analytics platform

    • Reporting and Visualization solutions

    • Machine Learning libraries



    The BIP department is structured into two main areas: one dedicated to DevOps activities (BOX) and one dedicated to Development Experience activities (BDX).


    Within this department, the TPE-CPM-DMM-BIP-BDX (λbox Development eXperience) team is in charge of defining the Big Data Platform (internally named the λbox), focusing on facilitating the development of analytic applications.



    Purpose



    This position covers the activities around the integration of Apache Kafka inside Amadeus. Apache Kafka is a Scala-based solution chosen to implement a high-performance distributed message queue. Kafka’s specificity is to move a significant part of the complexity of regular queuing systems into the consumers’ implementation: it provides simple distributed log management, with a mechanism to store offsets and to distribute consumers over the partitions. Kafka is part of a broader ecosystem and can be used in multiple use cases. Its integration with the Apache Spark project makes it a very interesting technology for Big Data streaming use cases, and its integration with the Reactive Streams frameworks (now part of the Java 9 specification) makes it very interesting for the development of event-driven micro-services. Supported by Confluent, Kafka comes with an ecosystem of libraries (Kafka Streams / KSQL) that provide a higher-level distributed computing framework on top of Kafka.
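    To make the two mechanisms mentioned above concrete, here is a minimal, self-contained Python sketch (it does not use the real Kafka client libraries) of how partitions can be spread over the consumers of a group, and how committed offsets are tracked per group and partition so a consumer can resume where it left off:

```python
# Illustrative simulation of two Kafka ideas described above: distributing
# a topic's partitions across the consumers of a group, and storing
# committed offsets per (group, partition). This is a sketch, not the
# actual Kafka client API.

def assign_partitions(partitions, consumers):
    """Round-robin assignment of partition ids to consumer names."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(sorted(partitions)):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

class OffsetStore:
    """Committed offsets per (group, partition), like __consumer_offsets."""
    def __init__(self):
        self._offsets = {}

    def commit(self, group, partition, offset):
        self._offsets[(group, partition)] = offset

    def committed(self, group, partition):
        # A (re)starting consumer resumes from the last committed offset;
        # with no commit recorded, it starts from 0 here.
        return self._offsets.get((group, partition), 0)

# Six partitions of one topic shared by a two-consumer group:
assignment = assign_partitions(range(6), ["c0", "c1"])
print(assignment)                   # {'c0': [0, 2, 4], 'c1': [1, 3, 5]}

store = OffsetStore()
store.commit("grp", 3, 42)
print(store.committed("grp", 3))    # 42
print(store.committed("grp", 0))    # 0
```

    If a consumer joins or leaves the group, rerunning the assignment over the new member list models a rebalance; the offset store is what lets the new owner of a partition continue from the last committed position.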



    Key accountabilities



    The candidate will be accountable for:



    • Automation of the operations of the Apache Kafka clusters: maintaining the current configuration management framework and participating in the migration of the deployment to Amadeus’ new standards (Ansible / OpenShift); defining and implementing the monitoring framework and alerting rules; creating and maintaining the tools to automate the functional setup of the cluster (lifetime and setup of cluster and topic configuration).

    • Supporting the Kafka clusters: participating in the on-call rota for the Kafka test and production infrastructure.

    • Investigating Kafka limitations/problems: jointly with our users, the candidate will investigate Kafka correctness/performance limitations and, in case of a confirmed problem, open tickets and follow their resolution with the Kafka community.

    • Contributing to the Apache Kafka project: to speed up the resolution of some key problems, the candidate will contribute reproducers, design solutions, code, documentation and test cases to the Kafka community.
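    The first accountability above (Ansible-based configuration management of the brokers) might look like the following sketch of a task list; the file paths, template name, handler and service name are illustrative assumptions, not Amadeus’ actual setup:

```yaml
# Hypothetical Ansible tasks for managing a Kafka broker's configuration.
# Paths, template and service names are placeholders for illustration.
- name: Render broker configuration from a template
  ansible.builtin.template:
    src: server.properties.j2
    dest: /etc/kafka/server.properties
  notify: restart kafka   # assumes a "restart kafka" handler is defined

- name: Ensure the broker service is running and enabled at boot
  ansible.builtin.systemd:
    name: kafka
    state: started
    enabled: true
```

    Driving the broker configuration from a template like this keeps cluster setup reproducible across test and production, which is what makes the later migration to OpenShift tractable.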



    Education




    • Post-secondary degree in Computer Science or a related technical field, or equivalent experience.

    • Fluent English.



    Specific competencies



    Technical Skills



    • Experience with configuration management frameworks (Ansible).

    • Experience with scripting languages (Python).

    • Experience with OpenShift or Kubernetes is a plus.

    • Knowledge of open-source development tooling (GitHub, Gradle) is a plus.

    • Experience with Scala, Java or C++ programming is a plus.

    • Experience working with Kafka is a plus.

    • Experience working with the open-source community is a plus.

    Benefits:

    Certificate
    Flexible Hours
    Letter of Recommendation