About Twinstake
Twinstake is a non-custodial, institutional-grade staking-as-a-service provider. Twinstake operates validators across a wide range of Proof-of-Stake protocols and allows institutional clients to delegate their assets via their preferred custodians and collect staking rewards. Twinstake charges a commission on staking rewards for operating the validators and providing analytics and reporting.
Twinstake’s differentiation is based on:
- Institutional-grade security and compliance (e.g., permissioned validators)
- Superior transparency and analytics, including dashboard and reporting
- Processes designed with institutional workflows in mind
About the role
We are seeking a highly skilled Senior Data Engineer who is passionate about building and optimizing data systems. In this role, you will lead the development of our data pipelines, databases, and systems, ensuring they are scalable, efficient, and reliable. You will collaborate closely with product engineers, analysts, and cross-functional teams to design robust data models and drive innovation in data engineering practices.
What you will contribute:
- Lead the development and optimization of our data pipelines, databases, and systems that transform data to meet our clients' needs
- Ensure scalability, efficiency, and reliability in all data systems
- Collaborate with product engineers and analysts to design and implement robust data models
- Drive innovation by staying updated with the latest data engineering practices, tools, and technologies
- Apply new techniques to solve complex business and data challenges
- Work closely with cross-functional teams to align data engineering initiatives with business objectives and customer needs
- Engage with clients to understand their data needs and deliver tailored solutions that address their specific challenges
- Produce scalable and efficient code, setting the standard for the team’s coding practices
- Perform code reviews and provide constructive feedback to ensure continuous improvement
What you bring:
- Hands-on experience in the crypto industry
- Extensive experience in data engineering, with a proven track record in building and scaling data systems
- Fluent in SQL and in Python or another modern language, such as Go or Rust
- Experienced with modern cloud-based database technologies for both batch and streaming workloads (e.g., BigQuery, Databricks, Snowflake, ClickHouse)
- Experienced with at least one workflow orchestrator (e.g., Airflow, Dagster, Argo Workflows)
- Knowledgeable in ingestion frameworks/systems (e.g., Airbyte, Fivetran, Meltano)
- Skilled in data transformation tools (e.g., dbt, Pandas, Apache Beam)
- Familiar with data quality frameworks (e.g., Great Expectations, Monte Carlo, or simply dbt tests)
- Excellent communication skills, with the ability to collaborate effectively in a remote work environment across multiple time zones
- Experienced in engaging with clients to ascertain their needs and deliver effective solutions
- Demonstrated ability to lead by example, champion best practices, and provide both technical direction and support
- Comfortable working with cross-functional teams to achieve alignment on business objectives and technical initiatives
What makes you stand out:
- Hands-on approach and comfortable working in a small team
- Detail-oriented, motivated self-starter who thrives in the complex and challenging environment of a rapidly evolving business
- Willingness to learn and take on new challenges
What we offer:
- Exposure to innovative web3 and blockchain technologies
- Great internal growth and development opportunities
- On-the-job training
- Competitive compensation, benefits, and perks