Managing the data flow effectively requires robust ingestion pipelines capable of handling both real-time and batch loading, so that data arrives in a timely, orderly manner, ready for further processing. Once collected, the raw data is inserted into the first storage layer, known as the Standard Layer: a temporary repository where data is organized according to the structures defined in the YAML files. Guided by those same YAML files, the ingestion pipelines ensure that data is promptly available for the subsequent processing stages.
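A minimal sketch of this pattern, assuming a hypothetical config structure: the table name, column types, and required fields below are illustrative stand-ins for what the YAML files would define (in practice the dict would come from `yaml.safe_load`), and an in-memory SQLite database stands in for the Standard Layer.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical ingestion config -- in practice this dict would be parsed
# from one of the YAML files; names and types here are illustrative only.
CONFIG = {
    "table": "standard_events",
    "columns": {"event_id": "TEXT", "payload": "TEXT", "ingested_at": "TEXT"},
    "required": ["event_id", "payload"],
}

def create_standard_layer(conn, config):
    """Create the Standard Layer table from the config-defined structure."""
    cols = ", ".join(f"{name} {typ}" for name, typ in config["columns"].items())
    conn.execute(f"CREATE TABLE IF NOT EXISTS {config['table']} ({cols})")

def ingest_batch(conn, config, records):
    """Validate each record against the config, then insert the batch."""
    accepted = []
    for rec in records:
        # Keep only records carrying every required field.
        if all(field in rec for field in config["required"]):
            accepted.append((rec["event_id"], rec["payload"],
                             datetime.now(timezone.utc).isoformat()))
    conn.executemany(
        f"INSERT INTO {config['table']} VALUES (?, ?, ?)", accepted)
    return len(accepted)

conn = sqlite3.connect(":memory:")
create_standard_layer(conn, CONFIG)
loaded = ingest_batch(conn, CONFIG, [
    {"event_id": "e1", "payload": "a"},
    {"payload": "missing id"},          # rejected: lacks event_id
    {"event_id": "e2", "payload": "b"},
])
print(loaded)  # 2 records accepted into the Standard Layer
```

A real-time path would reuse the same validation and insert logic, calling `ingest_batch` per event or per micro-batch instead of per file.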