- Interact with stakeholders to understand and implement new requirements
- Apply best practices to enhance data warehouse efficiency
- Build processes and methodologies to implement new solutions
- Design and build data pipelines for data movement and transformation across domains
- Contribute to a motivating work environment within the team
- Mentor other team members and promote opportunities for their growth
- Bachelor’s degree in Computer Science, Statistics, or an equivalent field
- Should have a minimum of 4 years of experience building ETL pipelines
- Should have experience working with big data systems
- Should have experience working with Hadoop, Spark, and Hive
- Should have experience working with at least one NoSQL database – HBase, MongoDB, or Redis
- Should have the ability to design and build big data pipelines
- Should be proficient in coding in Python or Scala
- Should have the ability to write complex SQL, including stored procedures, functions, and triggers
- Should have experience working with a wide range of data sources – databases, CSV, Excel, JSON, XML, APIs, and others
- Should have experience with command-line scripting – automating jobs and managing services
- Should have experience deploying workloads to a production environment, troubleshooting defects, and improving performance
- Should have experience with Splunk or Google Analytics
- Should have experience working with cloud-based technologies; experience with AWS is a plus
- Should possess good communication skills and the ability to interact with customers