Python Data Ingestion sounds like a car crash between nature and technology, but in an AI project it simply describes the interface pipeline that feeds data into a Python (or similar) script.
Data Ingestion might seem like just data import re-branded for an age where data consumption is, well, all-consuming, and where rich, ideally pre-labelled, datasets can differentiate an organisation’s services, products and promotions from competitors’. But there is a significant cost element as well: the risk of not properly architecting “data pipelines” is fragmented, redundant or broken “zombie API” interfaces, all of which add to technical debt. Gartner, in fact, suggests that actively managing and reducing technical debt in 2023 can lead to 50% faster service delivery.
Efficient data pipelines should be underpinned by cloud storage and compute: data storage solutions ranging from Lakehouse, Data Lake or Data Warehouse to SQL or NoSQL databases, alongside cloud computing services (EC2 instances, VPNs, clusters etc.) and APIs. In practice this typically means, for example, an AI Engineer handling (via APIs) multiple data formats such as Parquet, tar archives and pickled (model) objects, as well as structured and unstructured, scheduled and batch, streaming and transactional data.
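To make that concrete, here is a minimal sketch of what handling a few of those formats looks like at the Python end of an ingestion pipeline. The file names and the pandas/pyarrow stack are assumptions for illustration, not a specific ce.tech implementation.

```python
# Minimal sketch: pulling a few common formats into Python before handing
# them to downstream pipeline steps. File paths are hypothetical, and
# pd.read_parquet assumes pyarrow (or fastparquet) is installed.
import pickle
import tarfile

import pandas as pd


def load_parquet(path: str) -> pd.DataFrame:
    """Columnar, schema-aware format - typical for lake/warehouse exports."""
    return pd.read_parquet(path)


def load_pickled_model(path: str):
    """Pickle is commonly used for serialised model objects rather than raw data."""
    with open(path, "rb") as f:
        return pickle.load(f)


def extract_tar(path: str, dest: str = "./extracted") -> None:
    """Unpack a tar archive of raw files (e.g. CSVs or logs) for later ingestion."""
    with tarfile.open(path) as archive:
        archive.extractall(dest)


if __name__ == "__main__":
    df = load_parquet("sensor_readings.parquet")          # hypothetical file
    model = load_pickled_model("maintenance_model.pkl")   # hypothetical file
    print(df.head(), type(model))
```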
At ce.tech, we work with all of these formats - take a look at this Python Dash boilerplate for Predictive Maintenance as an example of how we can implement a streaming solution for your organisation. Press Start!
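For a flavour of what a streaming-style dashboard looks like, here is a minimal Dash sketch (assuming Dash 2+). The simulated sensor feed stands in for a real streaming source such as a Kafka topic or polled API; it is an illustrative sketch, not the boilerplate itself.

```python
# Minimal sketch of a streaming-style Dash app: a dcc.Interval callback
# polls for new data every second and redraws a line chart.
import random

from dash import Dash, dcc, html
from dash.dependencies import Input, Output
import plotly.graph_objs as go

app = Dash(__name__)
readings = []  # in-memory buffer standing in for the ingestion layer

app.layout = html.Div([
    html.H3("Predictive Maintenance - live sensor feed (sketch)"),
    dcc.Graph(id="live-chart"),
    dcc.Interval(id="tick", interval=1000, n_intervals=0),  # fire every second
])


@app.callback(Output("live-chart", "figure"), Input("tick", "n_intervals"))
def update_chart(_):
    # In a real pipeline this would read from the streaming source;
    # here we append a simulated vibration reading.
    readings.append(random.gauss(0.5, 0.1))
    return go.Figure(data=[go.Scatter(y=readings, mode="lines")])


if __name__ == "__main__":
    app.run(debug=True)
```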
The above image was created using OpenAI's generative AI model DALL-E, which uses a modified version of GPT-3 (a close relative of ChatGPT) to generate images. See https://openai.com/dall-e-2/