$12.0 per Hour
Data engineer with 3 years of experience.
Strong desire to move forward, face new challenges, and expand my skill set.
Good expertise in Hadoop ecosystem tools such as MapReduce, HiveQL, Sqoop, Spark, HBase, and Oracle.
In-depth understanding of Spark architecture, including Spark Core, Spark SQL, and DataFrames.
Hands-on experience in various big data application phases, such as data processing and analytics.
Experience in creating tables, partitioning, bucketing, and loading and aggregating data.
Experience working with cloud services such as Amazon Web Services (S3, EC2, EMR, RDS, Redshift, Athena, etc.).
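As an illustration of the table-creation, partitioning, and bucketing experience above, here is a minimal HiveQL DDL sketch held as Python strings (e.g. for submission via `spark.sql`); the table, columns, and path are hypothetical, not from a real project:

```python
# Hypothetical HiveQL DDL illustrating partitioning and bucketing.
# Table name, column names, and input path are illustrative only.
create_txn_table = """
CREATE TABLE IF NOT EXISTS transactions (
    txn_id     STRING,
    account_id STRING,
    amount     DOUBLE
)
PARTITIONED BY (txn_date STRING)          -- one directory per date on HDFS/S3
CLUSTERED BY (account_id) INTO 8 BUCKETS  -- hash-bucketed for joins/sampling
STORED AS ORC
"""

# Loading one day's input file into its partition.
load_partition = """
LOAD DATA INPATH '/input/txns_2022-01-01.csv'
INTO TABLE transactions PARTITION (txn_date = '2022-01-01')
"""
```

Partitioning by date keeps daily loads and date-range queries cheap, while bucketing by `account_id` speeds up joins and sampling on that key.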
Tech Stack Expertise
AWS S3 - 1 Year
- January 2019 - December 2022 - 4 Years
Meta Web GUI
- January 2020 - November 2020 - 11 Months
Created a Hive database in the local environment.
Integrated the Hive database with Hadoop and MySQL.
Received CSV input files from the client.
Created tables and loaded the input files into them.
Worked with the GUI team to integrate the local Hive database with the Meta Web GUI application.
Applied various HQL queries to fetch data from the database.
Attended daily sync-up calls and meetings.
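The load-and-query workflow above can be sketched end to end; in this sketch sqlite3 stands in for the local Hive database, and the CSV content, table, and columns are hypothetical:

```python
import csv
import io
import sqlite3

# Hypothetical CSV content standing in for a client-supplied input file.
csv_text = "id,name,balance\n1,alice,120.5\n2,bob,75.0\n3,carol,310.2\n"

conn = sqlite3.connect(":memory:")  # sqlite stands in for the local Hive DB
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, balance REAL)")

# Load the CSV rows into the table (the LOAD DATA step in Hive).
rows = list(csv.reader(io.StringIO(csv_text)))[1:]  # skip the header row
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", rows)

# A query the GUI layer might issue; the HQL would be near-identical.
high_balance = conn.execute(
    "SELECT name FROM customers WHERE balance > 100 ORDER BY name"
).fetchall()
print(high_balance)  # [('alice',), ('carol',)]
```

The real pipeline differs only in engine: `CREATE TABLE` and the `SELECT` run as HQL against the Hive metastore, and the CSV load uses `LOAD DATA INPATH` instead of `executemany`.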
Data Analysis (Credit Card)
- February 2021 - January 2022 - 12 Months
Domain: Banking
Developed PySpark programs to extract delta data from an S3 bucket.
Applied cleaning operations and multiple transformations, then stored the results in an S3 bucket.
Used shell scripting to automate script execution.
Worked on Spark SQL code as an alternative approach for faster data processing and better performance.
Worked on QA support activities, test data creation, and unit testing.
Fulfilled daily requirements assigned by the Scrum Master.
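The delta-extraction and cleaning steps above reduce to a watermark filter plus per-record transforms. A minimal plain-Python sketch of that logic follows; PySpark and S3 are replaced by in-memory data, and every field and value is hypothetical:

```python
from datetime import datetime

# Hypothetical records standing in for rows read from the S3 bucket.
records = [
    {"txn_id": "t1", "amount": " 120.50", "updated_at": "2021-06-01T10:00:00"},
    {"txn_id": "t2", "amount": "75.0 ",   "updated_at": "2021-06-03T09:30:00"},
    {"txn_id": "t3", "amount": "310.2",   "updated_at": "2021-05-28T14:00:00"},
]

# Watermark from the previous run: only records updated after it are "delta".
watermark = datetime(2021, 6, 1)

def clean(rec):
    """Cleaning transform: trim whitespace and cast amount to float."""
    return {**rec, "amount": float(rec["amount"].strip())}

delta = [
    clean(r) for r in records
    if datetime.fromisoformat(r["updated_at"]) > watermark
]
print([r["txn_id"] for r in delta])  # ['t1', 't2']
```

In the PySpark job the same shape appears as a `filter` on the watermark column followed by `withColumn` transforms, with the result written back to S3.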