Senior Data Engineer (CA or Remote)

Tenjin Inc

Date listed: 2 months ago




We're looking for a Senior Data Engineer experienced with streaming data processing and generating reporting metrics. Our ideal candidate has worked with stream processing tools and frameworks (such as Kafka, Flink, or Storm) and analytics datastores (such as Redshift, BigQuery, Druid, ClickHouse, or Snowflake), is comfortable managing infrastructure with Kubernetes, and enjoys working in a remote but collaborative environment.

This position is ideally based in San Francisco. Remote candidates can be considered, but you must be based in a US time zone.

Team, Culture, and Product

We're a 7-person engineering team in a 33-person company and are looking to rapidly and responsibly grow our team. We're remote-heavy, spread across three continents with hubs in Berlin and San Francisco. We aim to help mobile app developers make data-driven decisions about their marketing in a secure, privacy-centric, user-friendly way.


We run our web and data processing services on Amazon EKS (Elastic Kubernetes Service). Most of our web services are written in Go and most of our data processing services in Java. We also have legacy data collection and backend services written in Ruby.

We've started using Kafka and Flink to process events in some of our newer pipelines. We use S3, Redshift, and DynamoDB for storage and serve aggregated reports that support our APIs and dashboard from Postgres.

Our user-facing dashboard is a Ruby on Rails web application with a React frontend.

None of this is set in stone - we're very open to experimenting with and prototyping new approaches using other technology. We hope you'll be able to provide some experience-based guidance and a fresh perspective here!


Responsibilities

- Participate in major architecture and software design decisions about our reporting pipeline

- Develop and maintain services for generating and surfacing reports to our customers in a performant, low-latency manner

- Prototype new data pipelines and datastores

- Work cross-functionally with design and product to set requirements

- Set up monitoring, testing, and integrity alerts across our existing and new data pipelines


Required Skills

- Experience with a stream processing framework such as Flink, Storm, or Spark

- Experience with big data analytics datastores such as Redshift, BigQuery, Druid, ClickHouse, or Snowflake

- Experience with Git and GitHub or similar

- Experience with Java and SQL

- Excellent organization and communication skills


Nice to Have

- Experience with data platforms such as dbt or Dremio

- Experience with mobile marketing

- Experience with Ruby or Go

Findwork Copyright © 2021

