Title: Data Engineer - Pyspark
Job Description
Mandatory Skills: PySpark.
Experience: 6-8 Years.
• 7+ years relevant work experience in the Data Engineering field
• 5+ years of experience working with Hadoop and Big Data processing frameworks (Spark, Hive, Flink, Airflow, etc.)
• 5+ years of strong experience with relational SQL and at least one programming language such as Python
• Experience working in an AWS environment, primarily with EMR, S3, Kinesis, Redshift, Athena, etc.
• Experience building scalable, real-time and high-performance cloud data lake solutions
• Experience with source control tools such as GitHub and related CI/CD processes.
• Experience working with Big Data streaming services such as Kinesis, Kafka, etc.
• Experience working with NoSQL data stores such as HBase, DynamoDB, etc.
• Experience with data warehouses/RDBMS such as Databricks, Snowflake, and Teradata
• Ability to debug issues and assist the team in resolving challenges
• Ability to translate functional requirements into design, development, and deployment (from a data engineering perspective)
Deliverables
No | Performance Parameter | Measure |
1 | Process | No. of cases resolved per day, compliance with process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT |
2 | Team Management | Productivity, efficiency, absenteeism |
3 | Capability development | Triages completed, Technical Test performance |
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.