Portland, OR; Los Angeles, CA; Las Vegas, NV; Vancouver, BC; Dallas, TX; Austin, TX; Phoenix, AZ
My client is driving the future of entertainment by accelerating the convergence of sports, video games and media for a booming mobile-first audience worldwide. The company’s platform empowers mobile game developers and players with access to fun and fair competition for real prizes, shifting the paradigm to make esports accessible to anyone, anywhere. My client helps developers build multi-million-dollar game franchises by turning content into competitive social gaming properties for the world’s 2.6 billion gamers. The company has already worked with 20,000 game developers, leveraging its patented technology to host billions of casual esports tournaments for 30 million players worldwide.
What you’ll do:
Build new data systems for an online platform and assist the data science team in creating and deploying new algorithms for matchmaking and for fraud and cheat detection.
- Develop new systems to deliver real-time streaming analytics and an event-processing pipeline based on a fast-data architecture, handling throughput of millions of events per second.
- Create a business-grade data lake to support both business analytics needs and next-generation data infrastructure.
- Develop a data integration toolkit to build and manage automated, efficient data pipelines that ingest data from backend services into a data repository.
- Support our data science team in developing and deploying new processes for matchmaking, fraud, and cheat detection.
- Evaluate technical solutions for moving large data sets from a range of sources into formats consumable by reporting systems and analysts.
- Build infrastructure for monitoring and alerting on data integrity and data-system health.
- Champion industry best practices, including proper use of source control, code reviews, data validation, and testing.
- Support the product development team in creating new events to measure and track business growth.
- Create and implement engineering best practices, and collaborate with partner teams to design effective, streamlined processes.
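To make the event-pipeline responsibilities above concrete, here is a minimal sketch of the kind of validation step a real-time event pipeline might apply before loading events into a data repository. The field names and schema are illustrative assumptions, not the company’s actual event format.

```python
import json
from datetime import datetime, timezone

# Hypothetical required fields for a gameplay event; illustrative only.
REQUIRED_FIELDS = {"event_id", "player_id", "event_type", "timestamp"}

def validate_event(raw: str):
    """Parse a raw JSON event string.

    Returns (event, None) on success, or (None, reason) when the event
    is malformed and should be routed to an invalid-data sink.
    """
    try:
        event = json.loads(raw)
    except json.JSONDecodeError as exc:
        return None, f"malformed JSON: {exc}"
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        return None, f"missing fields: {sorted(missing)}"
    try:
        # Normalize timestamps to timezone-aware UTC for downstream consumers.
        event["timestamp"] = datetime.fromisoformat(event["timestamp"]).astimezone(timezone.utc)
    except (TypeError, ValueError):
        return None, "invalid timestamp"
    return event, None
```

In a production pipeline this function would sit behind a stream consumer, with rejected events written to a quarantine topic or table so that data-integrity monitoring can alert on rejection rates.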
What you’ll need:
Bachelor’s degree in Computer Science, Management Information Systems, Statistics, Operations Research, or a similar field (or foreign equivalent degree), plus 3 years of experience in data management, data engineering, and/or software engineering.
- 3 years of experience designing, implementing, automating, and maintaining large-scale ETL processes, with expertise in ETL practices such as idempotency, retry-and-backoff strategies, data parsing techniques, invalid-data handling, data staging, and code reuse.
- 3 years of experience with RDBMS technologies, including MySQL.
- 2 years of experience with data warehousing technologies (SQL Server, Snowflake, and/or Redshift).
- 3 years of experience building and managing pipeline authoring tools and running active data pipelines in production using Airflow.
- 2 years of experience creating containerized applications and hosting data systems on container orchestration platforms such as Kubernetes.
- 2 years of experience with event streaming and transformation from source to destination using distributed streaming systems such as Apache NiFi or Apache Flink.
- 3 years of experience with ANSI SQL, plus one or more of the following: Python, Java, Scala.
- 3 years’ experience rapidly prototyping a data product from scratch and leading its implementation in production.
- Familiarity with handling incidents outside of business hours and acting autonomously in a production environment to address critical issues.
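As an illustration of the retry-and-backoff ETL practice named in the requirements above, here is a minimal Python sketch; the function name and parameters are placeholders, not part of any specific toolkit.

```python
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Call fn(), retrying on exception with exponential backoff.

    The `sleep` parameter is injectable so tests can skip real delays.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                # Out of attempts: surface the failure to the scheduler.
                raise
            # Delay doubles each attempt: base, 2*base, 4*base, ...
            sleep(base_delay * (2 ** attempt))
```

Combined with idempotent pipeline steps (so a retried task can safely re-run), this pattern lets transient failures, such as a momentarily unreachable backend service, resolve without manual intervention.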
For more information or to apply, please contact: firstname.lastname@example.org
Salt is acting as an Employment Agency in relation to this vacancy.
Job Reference: JO-2012-196443
Salary per: annually
Job Start Date: ASAP
Job Industries: Software Engineering Jobs
Job Locations: California
Job Types: Permanent
Job Skills: Data Lake, Python, Snowflake, Spark