Data Engineer

BMT Score: 86%

Available for
  • Remote

About Aditya T

  • Working as an AWS Architect with 8+ years of experience.
  • Experience working with industry leaders across three major domains:
    • Morgan Stanley [Banking Domain]
    • Assurance [Insurance Domain]
    • Eli Lilly [Healthcare Domain]
    • Freecharge [Banking Domain]
  • Experience in developing applications on AWS using Python, S3, EMR, Lambda, DynamoDB, Redshift, SNS, SQS, and Kinesis.
  • Experience in creating dashboards in TIBCO Spotfire and Kibana.
  • 7+ years of experience working with ETL platforms such as Apache Spark, Databricks, and AWS Glue.
  • Experience in developing data lake applications using Hadoop, Sqoop, Hive, Spark, Spark Streaming, Impala, YARN, and Flume.
  • Experience in agile methodologies and DevOps.
  • Experience in developing applications using UNIX shell scripting.
  • Experience in working with multiple schedulers, including ActiveBatch, CloudWatch, cron, Autosys, TWS, and Oozie.
  • Implemented LDAP and Secure LDAP authentication on Hadoop, Hive, Presto and Starburst Presto.
  • Experience in working with different authentication mechanisms such as LDAP, BATCH, and Kerberos.
  • Good understanding of Software Development Life Cycle Phases such as Requirement gathering, analysis, design, development and unit testing.
  • Strong ideation and conceptualization skills; a sought-after contributor for many POCs.
  • Developed multiple utilities, including auto-cleanup, Workbench, ILM, logging integration, Hadoop job automation, mainframe interaction, and procedure automation, using Python and UNIX shell scripting; these are used throughout the account.
  • Self-motivated and a team motivator, with proficient communication skills and a positive, ready-to-learn attitude.
  • Goal-oriented and autonomous when required, with a fast learning curve and an appreciation for technology.

Work Experience


AWS Architect

  • January 2013 - August 2023 - 10 Years
  • India

Projects


AWS Data Engineer – DataHub Integration

  • August 2022 - August 2023 - 13 Months
Role & Responsibility
    Built a DataHub by migrating data from different sources such as Elasticsearch, RDS, Redshift, flat files, and Apache Spark, while ensuring that customer data is encrypted end to end (a minimal encryption sketch follows this project).
    Accomplishments include:
    Identified the tools and technologies needed to ensure a smooth flow of high-volume data.
    Designed the architectural pipeline.
    Played a key role in designing the pipeline and its components.
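A minimal sketch of the end-to-end encryption step, assuming server-side KMS encryption on S3; the bucket name, key alias, and file paths are illustrative assumptions, not the project's real values:

```python
# Hypothetical values throughout: bucket, KMS key alias, and paths are
# assumptions for illustration only.
import boto3

s3 = boto3.client("s3")  # boto3 uses HTTPS, covering encryption in transit

# Upload an extract with server-side KMS encryption so the object is
# encrypted at rest as soon as it lands in the DataHub bucket.
with open("customers.parquet", "rb") as body:
    s3.put_object(
        Bucket="example-datahub-bucket",
        Key="rds-extracts/customers.parquet",
        Body=body,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/datahub-key",
    )
```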

AWS Architect — Migration Accelerator

  • October 2020 - July 2022 - 22 Months
Role & Responsibility
    Migration Accelerator is a one-of-a-kind platform enabling users and customers to migrate big data applications to public clouds such as AWS and Azure. The accelerator supports different aspects of migration, such as data, workload, meta-store, security/governance, Apache Spark, orchestration migration, and Databricks.
    Accomplishments include:
    Designed the architecture for each section to ensure a smooth migration, considering the big data scenario.
    Played a key role in designing the pipeline and its components.
    Integrated all the different divisions into a web UI to enable one-click operation.

AWS Architect — Solution Accelerator

  • February 2020 - May 2020 - 4 Months
Role & Responsibility
    For a banking domain use case, we needed to build a platform sufficient to ingest and transform data, then train and deploy models to predict outcomes. The Solution Accelerator platform was created with complete DataOps and MLOps capabilities and is used for the execution and automation of all data pipelines and ML flows.
    Accomplishments include:
    Designed the architecture to implement the data platform, DataOps, and MLOps; incorporated newly launched SageMaker features such as Autopilot, Notebook Exp, and Experiments.
    Established automatic model creation and deployment to enable MLOps (see the sketch after this project).
    Enabled a one-click flow for dataset upload, transformation, prediction, and visualization, provisioned with the non-technical customer's needs in mind.
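One plausible shape of the automatic model creation step, assuming a SageMaker Autopilot job launched through boto3; the job name, S3 paths, target column, and role ARN are assumptions:

```python
# Hypothetical sketch: names, paths, and the role ARN are illustrative; the
# actual platform wrapped model creation behind a one-click UI.
import boto3

sm = boto3.client("sagemaker")

# Launch an Autopilot job that explores candidate models on a tabular
# dataset in S3 and selects the best one automatically.
sm.create_auto_ml_job(
    AutoMLJobName="solution-accelerator-demo",
    InputDataConfig=[
        {
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://example-bucket/train/",
                }
            },
            "TargetAttributeName": "outcome",  # column the model predicts
        }
    ],
    OutputDataConfig={"S3OutputPath": "s3://example-bucket/automl-output/"},
    RoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
)
```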

Senior Data Engineer — Eli Lilly DDR

  • July 2019 - December 2020 - 18 Months
Role & Responsibility
    Eli Lilly is one of the healthcare giants. They came up with the idea of executing clinical trials for different studies (e.g., migraine, diabetes, COVID-19). Our main objective was to capture a patient's sensor readings, such as heart rate, accelerometer, gyroscope, and migraine events, and stream them to the DDR admins and data scientists for monitoring and study purposes.
    Accomplishments include:
    Designed a data pipeline to ingest and process clinical trial study data arriving in both batch and streaming form.
    Created a streaming flow that sends data to Kinesis and displays it in Kibana.
    Enabled data analysts and scientists to access cleansed study data: created a Python utility with which data scientists can access S3 data from on-premise.
    Created a Data Generator that sends dummy data into Kinesis and can upload one or multiple files into S3 from on-premise (sketched after this project).
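A minimal sketch of what the Data Generator could look like, assuming boto3 for both the Kinesis and S3 sides; the stream name, bucket, record fields, and file paths are illustrative assumptions:

```python
# Hypothetical sketch: stream name, bucket, and record fields are assumptions.
import json
import random
import time

import boto3

kinesis = boto3.client("kinesis")
s3 = boto3.client("s3")

def send_dummy_reading(stream_name: str) -> None:
    """Send one synthetic sensor reading (heart rate, accel, gyro) to Kinesis."""
    record = {
        "patient_id": random.randint(1, 100),
        "heart_rate": random.randint(55, 120),
        "accelerometer": [random.uniform(-1.0, 1.0) for _ in range(3)],
        "gyroscope": [random.uniform(-1.0, 1.0) for _ in range(3)],
        "timestamp": time.time(),
    }
    kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=str(record["patient_id"]),
    )

def upload_files(bucket: str, paths: list[str]) -> None:
    """Upload one or more local files from on-premise into S3."""
    for path in paths:
        s3.upload_file(path, bucket, path.rsplit("/", 1)[-1])
```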

Senior Data Engineer — Information Lifecycle Management

  • March 2018 - May 2019 - 15 Months
Role & Responsibility
    When GDPR compliance became mandatory in European countries, no firm was allowed to keep data older than 7 years. To make our vertical compliant with GDPR standards, ILM was introduced. The framework keeps track of objects/files uploaded to AWS S3, and each object on S3 is assigned an archival and purge deadline. Through this framework, objects are automatically archived and purged, and lineage is maintained for all application objects (a minimal sketch follows this project).
    Accomplishments include:
    Designed the architecture to make the process interactive, flexible, and secure.
    As the framework dealt with every single object, established a failure-handling framework and prepared it for disaster recovery.
    Created a single point of logging to maintain the lineage of each object's lifecycle.
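The original framework tracked per-object archival and purge deadlines; as an approximation, a minimal sketch using standard S3 bucket lifecycle rules is shown below (bucket name, prefix, and day counts are assumptions):

```python
# Illustrative only: bucket, prefix, and day counts are assumptions. The real
# ILM framework tracked deadlines per object; lifecycle rules are one common
# way to automate archive-then-purge on S3.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-ilm-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-purge",
                "Filter": {"Prefix": "application-objects/"},
                "Status": "Enabled",
                # Archive objects to Glacier after one year...
                "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
                # ...and purge them after seven years, per the GDPR-driven policy.
                "Expiration": {"Days": 7 * 365},
            }
        ]
    },
)
```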

Data Engineer — DIH-RAR Morgan Stanley

  • September 2015 - October 2017 - 26 Months
Role & Responsibility
    Morgan Stanley is one of the banking domain leaders. To mitigate disasters and make the applications more security compliant, the Wealth Management Group of MS came up with the idea of RA-Remediation (the RAR framework). The objective was to parameterize the applications by removing hardcodings, revamping the traditional architectures, etc.
    Also created a single storage zone using the Data Integration Hub framework, so that all the data has a one-stop shop.
    Datalake Integration Hub: ingestion of data from Teradata/DB2/MF/GP/MySQL wealth management applications into Hadoop (a minimal ingestion sketch follows this project). The main objective of the project was to process data in the same zone and make Hadoop the landing and analysis ground for all WM applications, so that the business is positively impacted.
    Accomplishments include:
    As part of the RAR framework, worked on remediating multiple applications, including UNIX shell scripts, Python, Java files, Informatica, etc.
    Worked on the integration of various applications: RNC, PADT, NBA, and Advisory.
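A minimal sketch of one ingestion step, assuming PySpark reading a MySQL table over JDBC and landing it in Hadoop as Parquet; the host, database, table, credentials, and HDFS path are illustrative assumptions (the real project also covered Teradata, DB2, MF, and GP sources):

```python
# Hypothetical sketch: connection details and paths are assumptions.
# Requires the MySQL JDBC driver jar on the Spark classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dih-ingest").getOrCreate()

# Pull one wealth-management table over JDBC...
df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://example-host:3306/wm_db")
    .option("dbtable", "accounts")
    .option("user", "ingest_user")
    .option("password", "example-password")
    .load()
)

# ...and land it in the shared Hadoop zone as Parquet for downstream analysis.
df.write.mode("overwrite").parquet("hdfs:///data/dih/accounts")
```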

Industry Expertise

Education

Bachelor of Engineering

Indore University
  • June 2011 - June 2014
