Logistics Matters: What role does a data engineer play in an organization like DB SCHENKER?

Thomas Battenfeld: Data engineering might sound like a tech-heavy field, but it is fundamentally about preparing data and making it usable for various business needs. We are the ones who embed the analyses done by data scientists, operations research experts, and business analysts into the DB SCHENKER system landscape and make them usable for our employees. My team and I also ensure that the data we use is organized, clean, and easy to consume, and we automate processes and data pipelines. You could say that such “data management” is the cornerstone of efficiency. This is especially true for a company like DB SCHENKER, where timely decisions are pivotal. As data engineers, we manage and organize data to ensure it is accurate, timely, secure, and available.

My data engineering colleagues and I take care of building and maintaining these systems; you could call us the architects and builders of the company’s data infrastructure. My job is to design and build robust systems to collect, store, organize, and process large volumes of data. These systems enable the company to handle its data effectively, ensuring that daily operations run smoothly and efficiently.

Let’s consider a hypothetical scenario where DB SCHENKER wants to optimize its truck delivery schedules. Data engineers can create a basic system that collects data from the trucks’ GPS devices and delivery logs. Such a system could track when each truck departs, how long each delivery takes, and when the truck returns to the depot. By analyzing this data, the system could identify patterns and suggest more efficient schedules. If the data shows that certain routes are consistently slower during specific times of the day due to traffic, the system could recommend rescheduling those deliveries to avoid delays. Such a straightforward use of data helps DB SCHENKER optimize delivery schedules and maintain its fleet more effectively, leading to timely deliveries and reduced operational costs.
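To sketch what such an analysis could look like, here is a minimal Python example; the column names and sample records are invented for illustration, and a real system would of course read from the fleet’s GPS and delivery-log feeds rather than a hard-coded table.

```python
# Illustrative sketch only: find time-of-day slowdowns in delivery-leg data.
# The schema and the numbers below are hypothetical.
import pandas as pd

# Hypothetical delivery log: one row per completed delivery leg.
logs = pd.DataFrame({
    "route":        ["A", "A", "A", "A", "B", "B", "B", "B"],
    "depart_hour":  [8,   8,   16,  16,  9,   9,   17,  17],
    "duration_min": [42,  45,  78,  74,  55,  53,  58,  60],
})

# Average leg duration per route and departure hour.
profile = (
    logs.groupby(["route", "depart_hour"])["duration_min"]
        .mean()
        .reset_index()
)

# Flag departure hours that are markedly slower than the route's overall
# average; these are candidates for rescheduling.
route_avg = logs.groupby("route")["duration_min"].transform("mean")
logs["slow"] = logs["duration_min"] > 1.25 * route_avg
candidates = (
    logs[logs["slow"]]
        .groupby(["route", "depart_hour"])
        .size()
        .reset_index(name="slow_legs")
)

print(profile)
print(candidates)  # e.g. route A at 16:00 stands out -> consider shifting it
```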

In essence, data engineers are vital in ensuring that DB SCHENKER can use its data effectively, supporting everything from everyday decision-making to advanced AI technologies. We maintain the data “plumbing” of the company, ensuring it is robust and capable of supporting both current operational needs and future innovations. We enable the company to make informed, data-driven decisions.

Logistics Matters: How does Data Engineering integrate with other teams at DB SCHENKER?

Thomas Battenfeld: Our data engineering team at DB SCHENKER does not operate in isolation. It is part of a synergistic team that includes Business Consultants, Data Scientists, Operations Research Experts, and Engineers. Together, we bring advanced analytics and AI into our logistics operations.

Moreover, data engineering at DB SCHENKER is part of a larger engineering ecosystem, which includes several specialized teams. The Software Engineering experts develop customized frontend and backend solutions for various applications, enhancing user interfaces and backend processing capabilities. We also take care of developing data entry applications that ensure data is captured efficiently and accurately, feeding into our central systems without errors.

This collaborative environment ensures a seamless flow of information from the moment we receive it until we use it in our advanced systems, for example in AI applications. It’s a team effort that allows us to use the information effectively, not just store it away. We make sure it helps us make smart decisions and improve our services.

Logistics Matters: Can you discuss your experience in designing and implementing data infrastructure solutions as a data engineer, specifically looking at automation technologies at DB SCHENKER?

Thomas Battenfeld: As mentioned before, one of the primary focuses in data engineering is creating stable and scalable data systems capable of handling immense volumes of data. The importance of building such systems cannot be overstated – they are the backbone that supports all our data-driven initiatives. These systems must not only manage current data volumes but also scale effectively as the company grows and data demands increase. Ensuring stability and scalability means that our operations can continue seamlessly, even as we integrate more complex technologies, such as AI applications, and handle increasing amounts of data.

We harness the power of Microsoft Azure, utilizing its cloud computing capabilities to develop a robust and scalable data infrastructure. Azure offers a comprehensive suite of services that are crucial for building sophisticated data systems, including computing power, data storage, and networking solutions. Among these, Azure Data Factory stands out as a vital component of our automation strategy, enabling the orchestration and automation of data movement and transformation across diverse sources and destinations.

Our implementation of automated workflows within Azure Data Factory streamlines our data extraction, transformation, and loading (ETL) processes. These workflows aggregate data from varied sources, such as on-premises databases and cloud services, standardize it, and consolidate it in our centralized data lake hosted on Azure. This automation enhances operational efficiency and maintains high data integrity and consistency across our systems. With the data centralized and organized, it becomes a key asset for advanced analytics and artificial intelligence initiatives, empowering our data science and operations research teams to build predictive models, optimize logistics, and improve decision-making. Azure’s scalability additionally allows us to adjust computing resources based on demand, ensuring cost-efficiency and uninterrupted operations, thus providing a strong foundation for both current needs and future expansion.
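To give a flavour of the standardize-and-consolidate step that these pipelines automate, here is a simplified Python sketch; the source schemas, column mappings, and output file are assumptions made for this example, and in production the orchestration happens in Azure Data Factory rather than in hand-written scripts.

```python
# Simplified illustration of an ETL consolidation step (not the production
# pipeline): two differently shaped sources are mapped to one canonical
# schema and persisted in an analytics-friendly format.
import pandas as pd

# Hypothetical extracts: an on-premises database export and a cloud-service
# export with different column names.
on_prem = pd.DataFrame({
    "ShipmentNo": ["S-1001", "S-1002"],
    "Dest":       ["Hamburg", "Rotterdam"],
    "KG":         [120.0, 80.5],
})
cloud = pd.DataFrame({
    "shipment_id": ["S-2001"],
    "destination": ["Antwerp"],
    "weight_kg":   [200.0],
})

# Standardize both sources to one canonical schema.
canonical = ["shipment_id", "destination", "weight_kg"]
on_prem = on_prem.rename(columns={
    "ShipmentNo": "shipment_id",
    "Dest": "destination",
    "KG": "weight_kg",
})[canonical]
cloud = cloud[canonical]

# Consolidate and write out, standing in for landing the data in a central
# data lake (writing parquet requires an engine such as pyarrow).
consolidated = pd.concat([on_prem, cloud], ignore_index=True)
consolidated.to_parquet("shipments_consolidated.parquet", index=False)
print(consolidated)
```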

Logistics Matters: What approaches do you take as a data engineer to make complex data systems accessible and actionable for all users at DB SCHENKER?

Thomas Battenfeld: Another key part of my role is to ensure that our data is not just available but also easily understandable and useful for everyone in our organization, from warehouse staff to management. To make data as accessible as possible, we use tools like Power BI and Power Apps. These platforms are particularly transformative because they enable us to quickly create and test ideas, develop prototypes, and build minimum viable products (MVPs) that can be refined based on user feedback. For example, Power Apps allows us to build custom applications tailored to the specific needs of different departments without extensive coding, making essential data available for decision-making processes.

To make it more tangible, let’s say we want to improve how warehouse staff track shipments. Using Power Apps, we can quickly build a custom app that allows staff to scan barcodes with their mobile devices. This app pulls data from our central system and instantly shows the current location, status, and history of each shipment. To make data even more accessible, we use Power BI to create interactive dashboards for management. These dashboards visually display key metrics like shipment times, delivery performance, and inventory levels. Managers can see immediately how operations are performing and can drill down into the details if needed.
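The lookup behind such an app is conceptually simple. Purely as a toy illustration (the real app is built in Power Apps rather than hand-written code, and the barcode and events below are invented), it boils down to resolving a scanned barcode against the central store and returning the latest status together with the history:

```python
# Toy illustration of a shipment lookup by scanned barcode; data is invented.
from dataclasses import dataclass

@dataclass
class ShipmentEvent:
    timestamp: str
    location: str
    status: str

# Hypothetical central store keyed by the barcode on the shipment label.
central_store = {
    "4006381333931": [
        ShipmentEvent("2024-05-02 08:15", "Essen hub", "received"),
        ShipmentEvent("2024-05-02 14:40", "Duisburg DC", "in transit"),
        ShipmentEvent("2024-05-03 07:05", "Hamburg depot", "out for delivery"),
    ],
}

def lookup(barcode: str) -> dict:
    """Return the latest status, location, and full history for a barcode."""
    history = central_store.get(barcode, [])
    latest = history[-1] if history else None
    return {
        "current_status": latest.status if latest else "unknown",
        "current_location": latest.location if latest else "unknown",
        "history": history,
    }

print(lookup("4006381333931")["current_status"])  # -> out for delivery
```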

Continuous improvement is one of the core principles in our approach to data management. We actively establish feedback loops with our users to understand how well our data-driven tools and interfaces meet their needs. This feedback is invaluable as it guides our development process, helping us to refine our solutions and ensure they truly enhance productivity and user experience. By using these tools and focusing on the needs of our users, we ensure that data is not just available but genuinely useful for everyone at DB SCHENKER. Our commitment to continuous improvement and user feedback ensures that our data systems remain effective and adaptable as our operations evolve.

Logistics Matters: As someone who works closely with automation processes, how can automation improve our daily work activities and benefit our workflows?

Thomas Battenfeld: As a data engineer heavily involved in automation, I see daily how automation helps streamline our work. It simplifies many of the repetitive tasks we used to do manually, allowing us to focus on more complex issues that require deeper problem-solving skills.

Automation of data entry and shipment processing means we can concentrate on improving customer service and refining our operational tactics. This is important because it allows us to utilize our skills in areas that benefit most from human judgment, such as strategic planning. Automation also helps us reduce errors, especially those that can happen during manual data entry. For instance, our tracking system updates shipment statuses worldwide instantly, providing essential data that helps manage our logistics more efficiently.

I think that by removing the burden of routine tasks, automation contributes to better job satisfaction. It takes over activities like organizing and managing data, which can be time-consuming. This not only lowers stress but can also make our workdays more fulfilling.

Published: May 2024