Data Engineer

BMT Score: 76%

Available for: Hybrid
About PAVANI V

  • Working experience with Linux distributions such as Red Hat and CentOS.
  • Experience in designing and building the data management lifecycle, covering data ingestion, integration, consumption, and delivery, along with reporting, analytics, and system-to-system integration.
  • Proficient in Big Data environments, with hands-on experience using Hadoop ecosystem components for large-scale processing of structured and semi-structured data.
  • Strong experience across all project phases, including requirement analysis, design, coding, testing, support, and documentation.
  • Extensive experience with Azure cloud technologies, including Azure Data Lake Storage, Azure Data Factory, Azure SQL, Azure Data Warehouse, Azure Synapse Analytics, Azure Analysis Services, Azure HDInsight, and Databricks.
  • Solid knowledge of AWS services such as EMR, Redshift, S3, and EC2, including configuring servers for auto scaling and Elastic Load Balancing.

Tech Stack Expertise

  • Microsoft .Net (AJAX, Cassandra) - 4 Years
  • Scripting Language (jQuery, JavaScript, JSON) - 6 Years
  • C++ - 2 Years
  • HTML (HTML, HTML5, DHTML) - 6 Years
  • CSS - 2 Years
  • AWS (AWS S3) - 2 Years
  • Python - 2 Years
  • C - 2 Years
  • MongoDB - 2 Years

Projects

CRM

  • January 2020 - October 2022 - 34 Months
Technologies
Role & Responsibility
    • Evaluated client needs and translated their business requirements into functional specifications, onboarding them onto the Hadoop ecosystem.
    • Worked with business and user groups to gather requirements and to create and develop data pipelines.
    • Migrated applications from Cassandra DB to Azure Data Lake Storage Gen 1 using Azure Data Factory; created tables and loaded and analyzed data in the Azure cloud.
    • Created Azure Data Factory pipelines, managed Data Factory policies, and utilized Blob Storage for storage and backup on Azure.
    • Developed processes to ingest data into the Azure cloud from web services and load it into Azure SQL DB.
    • Developed distributed Spark applications in Python (PySpark) to load high-volume files with different schemas into DataFrames and process them for loading into Azure SQL DB tables.
    • Designed and developed pipelines using Databricks, automating them for ETL processing and ongoing maintenance of the workloads.
    • Created ETL packages using SSIS to extract data from various sources, including Access databases, Excel spreadsheets, and flat files, maintaining the data in SQL Server.
    • Performed ETL operations in Azure Databricks, connecting to different relational databases using Kafka, and used Informatica for creating, executing, and monitoring sessions and workflows.
    • Automated data ingestion into the Lakehouse, transformed the data using Apache Spark, and stored it in Delta Lake.
    • Ensured data quality and integrity using Azure SQL Database and automated ETL deployment and operationalization.
    • Used Databricks, Scala, and Spark to create data workflows and capture data from Delta tables in Delta Lake.
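The multi-schema ingestion step described in the responsibilities above can be sketched in plain Python. This is a simplified, dependency-free stand-in for the actual PySpark job; the schema, field names, and functions are illustrative assumptions, not taken from the project itself:

```python
# Sketch: align records arriving with differing schemas to one target
# schema before loading downstream. A simplified stand-in for the
# PySpark flow described above; all names here are illustrative.

TARGET_SCHEMA = ["id", "name", "amount"]  # hypothetical target columns


def align_record(record: dict) -> dict:
    """Map one source record onto the target schema, filling gaps with None."""
    return {col: record.get(col) for col in TARGET_SCHEMA}


def align_batch(records: list[dict]) -> list[dict]:
    """Align a batch of heterogeneous records; extra columns are dropped."""
    return [align_record(r) for r in records]


if __name__ == "__main__":
    batch = [
        {"id": 1, "name": "a", "amount": 10.0, "extra": "x"},  # extra column
        {"id": 2, "name": "b"},                                # missing column
    ]
    for row in align_batch(batch):
        print(row)
```

In the real pipeline, this normalization would happen on PySpark DataFrames before writing to Azure SQL DB or Delta Lake; the per-record logic is the same idea at a smaller scale.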
