Automated Data Flow (ADF)



Introduction

It is well established that well-developed and inclusive financial systems are associated with more rapid growth and better income distribution. Finance helps the rural poor catch up with the rest of the economy as it grows. It also extends the range of individuals, households, and firms that can gain a foothold in the modern economy, and thus reduces damaging concentrations of economic power. There is now a greater appreciation of the "empowerment" dimension of finance, to the extent that it can give ordinary people and the poor access to opportunity and the ability to escape conservative social structures.


    The key benefits of ADF are:
  1. Enhanced data quality and security
  2. Improved productivity and efficiency
  3. Faster submission process
  4. Reduced Total Cost of Ownership (TCO)
  5. Extensible platform for centralized MIS and Analytics
  6. Elimination of manual intervention, substantially reducing incidences of human error
  7. Assured data integrity, accuracy, and timely reporting
  8. Improved MIS and Decision Support Systems (DSS) for the bank


Banks are required to achieve automation of data flow, as explained below. The conceptual end-state architecture representing data acquisition, integration, conversion, and submission is shown in the figure below. Under this architecture, the Data Integration & Storage layer integrates and cleanses the source data. The Data Conversion layer then transforms this data into the prescribed formats, and the transformed data is submitted to the Reserve Bank by the Data Submission layer.
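The layered flow described above can be sketched as a simple pipeline. This is a minimal illustration only; the source systems, field names, and the CSV-style return format are assumptions, not part of any RBI specification.

```python
# Minimal sketch of the four-layer ADF pipeline: acquisition,
# integration & storage, conversion, and submission.
# All names and formats here are illustrative assumptions.

def acquire(sources):
    """Data Acquisition: pull raw records from each source system."""
    return [row for system in sources for row in system]

def integrate(rows):
    """Data Integration & Storage: cleanse records and load the CDR."""
    cdr = []
    for row in rows:
        if row.get("amount") is not None:          # drop incomplete records
            cdr.append({**row, "branch": row["branch"].strip().upper()})
    return cdr

def convert(cdr):
    """Data Conversion: transform CDR data into a prescribed format."""
    total = sum(r["amount"] for r in cdr)
    return f"branch_count,{len({r['branch'] for r in cdr})}\ntotal,{total}"

def submit(report):
    """Data Submission: hand the generated return to the channel."""
    return {"status": "submitted", "payload": report}

# Example run over two mock source systems (e.g. CBS and Treasury).
cbs      = [{"branch": " mum01 ", "amount": 100}]
treasury = [{"branch": "DEL02", "amount": 250},
            {"branch": "DEL02", "amount": None}]
result = submit(convert(integrate(acquire([cbs, treasury]))))
print(result["payload"])
```

Each function stands in for one architectural layer, so the end-to-end flow is a single composition of the four stages, with no manual step between source data and submission.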


Figure: Conceptual end-state ADF architecture (data acquisition, integration, conversion, and submission layers)

At D2K, we have a strong, dedicated team of experienced bankers and technocrats that is implementing the ADF module in various banks.


    Conclusion:
  • An automated data flow from the IT systems of banks to the Reserve Bank, with no manual intervention, will significantly enhance the quality and timeliness of data. This will benefit the Indian banking sector by enabling more informed policy decisions and better regulation. Over and above the regulatory benefits, there will be many additional benefits to the internal operations, efficiency, and management of banks.
  • Firstly, the Data Acquisition layer will ensure higher data quality and broader coverage of source data in electronic form. This will remove inefficiencies in the bank's operational processes, as most processes can be automated. Secondly, major benefits will be derived from harmonized metadata at the Data Integration & Storage layer, allowing smooth communication and fewer data mismatches within the bank, which often arise from differences in definitions across departments. Thirdly, the centralized data repository will hold a wealth of data that the bank can easily leverage for internal reporting. This not only saves a major investment in a separate reporting infrastructure but also provides the bank's management with recent, high-quality data for decision support. Finally, the automated flow of data will yield significant cost and efficiency gains in the returns submission process itself, ensuring a smooth process with minimal delays and data of much higher quality.
  • These benefits will accrue to the bank once the automation of data flow is accomplished and the final automated end-state has been reached. The end-state has a layer-by-layer flow of data from the place where it is generated to the central repository and finally to the Reserve Bank.

Layered Structure

    The details of each layer are enumerated below:
  • Data Acquisition Layer: The Data Acquisition layer captures data from various source systems, e.g. the Core Banking Solution, Treasury Application, Trade Finance Application, etc. At D2K, data acquisition shall be attempted through an ETL tool, and data available only in physical form shall be migrated through a Gap Data Interface/Excel worksheet after exploring the feasibility.
  • Data Integration & Storage Layer: The Data Integration & Storage layer extracts and integrates the data from source systems at the maximum granularity required for Reserve Bank returns and ensures its flow to the Centralized Data Repository (CDR). Banks having a Data Warehouse may consider using it as the CDR after ensuring that all data elements required to prepare the returns are available in the Data Warehouse. To ensure the desired granularity, banks may either modify source systems or define appropriate business rules in the Data Integration & Storage layer. At D2K, extracted data would be housed in a staging server and then integrated into the CDR. The solution has proprietary granular masters to support such modifications and to define appropriate business rules. We propose to map the parameterized information available in the source data systems to the masters available in the proposed solution, and to validate the mapping against the information required to generate the reports in scope. We expect that data in the source systems is maintained at the granularity level required for report generation under the scope of the solution. The conversion, transformation, and consolidation of granular information will be implemented at the data integration stage as per requirements, to the extent permissible from the source data.
  • Data Conversion Layer: This layer converts the data stored in the CDR into the prescribed formats using pre-defined business rules. The data conversion structure could range from a simple spreadsheet to an advanced XBRL instance file. The Data Conversion layer will also perform validations on the data to ensure the accuracy of the returns. Common validations such as basic data checks, format and consistency validations, abnormal data variation analysis, reconciliation checks, and exception reports would be carried out in this layer. At D2K, the data stored in the CDR shall be retrieved based on extant business logic in consultation with the Bank, and the conversion of data into reports shall be addressed as per the requirements agreed with the Bank. The output formats/layouts will be provided by the Bank. As mentioned above, the majority of business rules, such as business consistency validations, abnormal data variation analysis, reconciliation checks, and exception reports, will be finalized and implemented at the data extraction stage to ensure that validated data is populated in the CDR. Extant business logic relating to format checks will be addressed at the report generation level.
  • Data Submission Layer: The Data Submission layer is a single transmission channel that ensures a secure file upload mechanism in STP mode with reporting platforms such as ORFS. In all other instances, the required returns may be forwarded from the bank's repository in the prescribed format. The returns submission process may use automated, system-driven triggers or schedulers, which will automatically generate and submit the returns. When the returns generation process is triggered, the system may check whether all the data required to generate the return has been loaded into the central repository; it may start preparing the return only after all the required data is available. The Data Submission layer will acknowledge error messages received from the Reserve Bank for correction and further processing. At D2K, based on the discussions and business rules finalized with the Bank, specified triggers/schedulers would be designed to generate the respective reports. Business rules for ensuring the correctness of data would be implemented at the data extraction/cleansing stage so that error-free data is reported; such rules would be finalized in consultation with the bank. The delivery mechanism/mode of RBI reports, as well as the framework for acknowledging messages received from RBI and further action in this regard, would be provided by the Bank.
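The validations named in the Data Conversion layer above can be sketched as a small pre-submission check routine. The field names, the GL control total, and the tolerance are illustrative assumptions, not prescribed values.

```python
# Hedged sketch of pre-submission validation: basic data checks,
# a consistency check, and a reconciliation check, collected into
# an exception report. All names and thresholds are assumptions.

def validate_return(rows, gl_control_total, tolerance=0.01):
    """Run checks on return data and collect an exception report."""
    exceptions = []
    for i, row in enumerate(rows):
        # Basic data check: mandatory fields must be present.
        for field in ("branch", "amount"):
            if row.get(field) in (None, ""):
                exceptions.append(f"row {i}: missing {field}")
        # Consistency check: amounts must be non-negative.
        amt = row.get("amount")
        if isinstance(amt, (int, float)) and amt < 0:
            exceptions.append(f"row {i}: negative amount {amt}")
    # Reconciliation check: return total vs. the GL control total.
    total = sum(r["amount"] for r in rows
                if isinstance(r.get("amount"), (int, float)))
    if abs(total - gl_control_total) > tolerance:
        exceptions.append(f"reconciliation break: {total} vs {gl_control_total}")
    return exceptions

rows = [{"branch": "MUM01", "amount": 100},
        {"branch": "", "amount": -5}]
report = validate_return(rows, gl_control_total=100)
```

An empty exception report would indicate that the return is fit to populate the CDR; a non-empty one would be routed back for correction before submission.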
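The system-driven trigger described in the Data Submission layer can be illustrated with a readiness check: a return is generated only once every source feed it depends on has landed in the repository. The feed names below are hypothetical.

```python
# Illustrative sketch of a returns-generation trigger that defers
# generation until all required feeds are loaded. Feed names are
# assumptions for illustration only.

REQUIRED_FEEDS = {"cbs_daily", "treasury_daily", "trade_finance_daily"}

def ready(loaded_feeds):
    """Check that every feed the return depends on is available."""
    return REQUIRED_FEEDS.issubset(loaded_feeds)

def trigger_return(loaded_feeds):
    """Generate the return if ready; otherwise report what is missing."""
    if not ready(loaded_feeds):
        missing = REQUIRED_FEEDS - set(loaded_feeds)
        return {"generated": False, "waiting_for": sorted(missing)}
    return {"generated": True, "waiting_for": []}

# The treasury feed has not landed yet, so generation is deferred.
print(trigger_return({"cbs_daily", "trade_finance_daily"}))
```

In practice such a check would be invoked by a scheduler (e.g. a nightly job) rather than called directly, but the gating logic is the same.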


Automated Data Flow (ADF) Online Demo

Please click on the link given below to visit the live application demo.

Coming Soon...


Automated Data Flow (ADF) White Papers

Coming Soon...