Ericsson holds a unique position to drive change in an all-connected world. How we grow and succeed long-term depends on our ability to change, transform and adapt to the world around us. We believe change is driven by disruptive technology, with an emphasis on speed and ease of use to accelerate productivity and innovation. 5G has the potential to change the world, further powering the hottest trends in tech today: the Internet of Things (IoT), Artificial Intelligence (AI) and Augmented Reality (AR). Ericsson is at the heart of this development and is shaping this future.
We are currently investing in key competencies in a few selected areas. We want to grow our teams within cloud, microservices, cyber security and next-generation network competences, and we are establishing dedicated Microsoft, Google, Amazon, and microservices teams. These changes aim to increase our efficiency in developing and releasing new capabilities and services, and to create an organization characterized by expert knowledge, business-value outcomes and innovation.
Ericsson has developed a custom-built platform for data-intensive applications which can be used for AI, machine learning, data mining, real-time processing, or data storage and retrieval on a massive scale.
The platform includes a range of services for compute, storage and data pipeline development that run on bare metal as well as in public clouds. The platform services can be accessed by software developers, data scientists, analysts and other enterprise IT professionals over the Ericsson internal network. The platform uses enterprise-class security to safeguard massive amounts of commercially sensitive data.
The platform is currently in use across all Ericsson business areas and is enabling new business areas and business models for Ericsson.
We are now looking for an Information and Communication Technology (ICT) Architect in the realm of Data Engineering. You will convert requirements into new data pipelines, optimize our data and data pipeline architecture, and design and optimize data flow and collection for cross-functional teams. You will interact with internal stakeholders and external customers to define and provide solutions that improve their competitive position. Finally, you will act as an internal consultant across many IT areas.
The ideal candidate is an experienced data pipeline builder who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, data analysts and data scientists on data initiatives and will ensure that a consistent, optimal data-delivery architecture is maintained throughout ongoing projects. You must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Build the cloud infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies.
- Build tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Implement data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Keep our data secure across national boundaries through multiple data centers and regions.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Work with stakeholders including Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
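To make the extraction, transformation, and loading responsibilities above concrete, here is a minimal, hypothetical ETL sketch. It uses an in-memory SQLite database as a stand-in for a real warehouse; all table, column, and function names are illustrative assumptions, not part of the Ericsson platform.

```python
import sqlite3

def extract(conn):
    """Pull raw event rows from a source table (hypothetical schema)."""
    return conn.execute("SELECT user_id, amount FROM raw_events").fetchall()

def transform(rows):
    """Aggregate revenue per user -- a simple business performance metric."""
    totals = {}
    for user_id, amount in rows:
        totals[user_id] = totals.get(user_id, 0) + amount
    return totals

def load(conn, totals):
    """Write the aggregated metric into a reporting table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS user_revenue (user_id TEXT PRIMARY KEY, total REAL)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO user_revenue VALUES (?, ?)", totals.items()
    )

# Illustrative source data standing in for a wide variety of data sources.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [("alice", 10.0), ("bob", 5.0), ("alice", 2.5)],
)
load(conn, transform(extract(conn)))
```

In a production pipeline the same extract/transform/load shape would typically be expressed with 'big data' technologies rather than plain SQLite.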
- We are looking for a candidate with 5+ years of experience in data engineering, who has an academic degree: MSc, MBA or equivalent.
- Strong SQL knowledge and experience working with relational databases, as well as familiarity with a variety of database technologies.
- Experience building ‘big data’ pipelines, architectures and data sets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong analytic skills related to working with unstructured datasets.
- Ability to communicate clearly and concisely across multiple audiences and partners, including the ability to explain analytical outcomes and technical roadblocks in business terms.
- Experience supporting and working with cross-functional teams in a dynamic environment.
The candidate should also have experience with the following software/tools:
- Kubernetes and microservices application architecture.
- Public cloud services: AWS, GCP, and Azure.
- Big data tools, e.g. Spark and Kafka.
- Relational SQL and NoSQL databases, e.g. Postgres and Cassandra.
- Data pipeline and workflow management tools, e.g. Luigi and Airflow.
- Stream-processing systems, e.g. Storm and Spark Streaming.
- Object-oriented and functional scripting languages, e.g. Python, Java, C++, and Scala.
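As a small illustration of the stream-processing idea behind systems like Storm and Spark Streaming listed above, here is a hypothetical sliding-window aggregation in plain Python. The window size and values are illustrative assumptions; real stream processors apply the same pattern distributed across many nodes.

```python
from collections import deque

class SlidingWindowAverage:
    """Maintain a running average over the last `size` events (a sketch of
    the windowed aggregation primitive used in stream processing)."""

    def __init__(self, size):
        # deque with maxlen automatically evicts the oldest event.
        self.window = deque(maxlen=size)

    def add(self, value):
        """Ingest one event and return the current window average."""
        self.window.append(value)
        return sum(self.window) / len(self.window)

# Feed a small stream of illustrative values through a window of 3.
avg = SlidingWindowAverage(size=3)
results = [avg.add(v) for v in [10, 20, 30, 40]]
# windows seen: [10], [10, 20], [10, 20, 30], [20, 30, 40]
```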
The selection and interview process is ongoing, so send in your application in English as soon as possible. For any questions or clarifications, reach out to the Senior Recruiter, Valentyna Ivanova, at firstname.lastname@example.org