@Metigy we are creating the next generation of AI-driven Marketing Technology (Martech) for businesses, helping them discover and unlock the secrets hidden in their data. Our AI is built on state-of-the-art data mining and delivered through the art of storytelling, statistics and gamification. We aim to delight and reward our customers by improving their marketing performance and delivering best-in-class tools.
As a data engineer @Metigy, you will collect, store, process and analyse large datasets from a wide range of data sources. You will work with our data scientists to design and deliver data pipelines that give our customers real-time stories and statistics, driving insights, triggering actions and rewarding them.
Our Mission is to help users succeed with our marketing assistant technology.
What you’ll be working on
- Helping to architect solutions, and selecting and integrating the Big Data tools and frameworks that provide real-time value and insights
- Creating, maintaining, monitoring and evolving full ETL processes on AWS infrastructure, recommending and implementing improvements incrementally
- Collaborating with Data Scientists and Analysts to identify data opportunities, then testing and delivering new functionality to grow the product
- Warehousing massive datasets so they are ready for future mining
- Turning vague questions into thorough, data-driven answers through ad-hoc research and analysis, presenting results clearly and occasionally publishing research
- Working with stakeholders across the startup to assist with data-related technical issues and support our data infrastructure needs
What Data Engineer skills do you need?
- You have worked for 5+ years as a data engineer in Data Warehousing, Business Intelligence and Big Data processing
- A solid understanding of distributed systems delivered in the cloud using a microservices-based architecture deployed on AWS
- Experience delivering pipelines and implementing large distributed data storage systems (Kinesis, Spark, Redshift, S3, etc.)
- You have designed and implemented the collection, analysis and optimisation of datasets from a variety of sources
- Solid, practical experience in programming languages such as Scala, Python, Node.js or Java
- You are focused on delivering solutions, take accountability for delivery, and are accustomed to working in a fast-paced, high-pressure startup environment
- Strong project management and organisational skills, with the ability to work on and deliver solo projects
- Proven history of manipulating, processing and extracting value from large disconnected datasets using distributed systems
- Working knowledge of deploying and scaling ML / Deep Learning models into production
Above all, you have the drive to learn and master new technologies and techniques continually. We live in a beautiful, ever-changing world that happens in real time, all the time. And our goal is to help our customers conquer that new frontier, one great recommendation at a time!
You are preferably Sydney-based and already eligible to work full-time in Australia.
Strictly no recruiters