This is an outstanding opportunity to join a rapidly growing, Series A, remote-friendly fintech with a dynamic team. You will be building the future of trade finance and SME lending using cutting-edge AI/ML techniques.
We seek an ambitious and self-motivated Data Engineer to join our core engineering team to keep up with customer demand. The ideal teammate is an experienced professional looking to take ownership of critical components of our proprietary machine learning platform.
As a Data Engineer, you will be responsible for:
- Designing and implementing highly scalable data pipelines for real-time prediction, streaming, and batch processing.
- Working with large volumes of structured and unstructured data.
- Working alongside other engineers, data scientists, and product teams to build AI decision engines for financial institutions and Fortune 500 corporations.
- Enabling data scientists to iterate quickly and deploy machine learning models into production.
- Highlighting and reviewing data anomalies and quality issues.
- Performing exploratory analysis of external and public data sources and integrating them into our ecosystem.
- Following agile processes with a focus on delivering production-ready testable code in small iterations.
You should be:
- Curious to learn and assimilate information quickly, enthusiastic to share and teach others.
- Keenly analytical and meticulous about problem-solving.
- An outstanding communicator with sound interpersonal skills.
- Strongly interested in technology and continuous learning.
- Able to work autonomously and resourcefully in a fast-paced startup environment.
You should have:
- 3+ years of professional data engineering experience in data-intensive environments.
- Familiarity with concepts in data mining, data engineering, and backend API development.
- Advanced skills and experience with Python, SQL, Bash, and other scripting or compiled languages.
- Demonstrated knowledge of data workflow systems such as Apache Airflow.
- Demonstrated knowledge of distributed computing frameworks such as Apache Spark.
- Demonstrated exposure to the Python machine learning ecosystem, including pandas, ETL tooling, and web frameworks such as Flask.
- Experience with or exposure to Amazon Web Services (AWS), Google Cloud Platform (GCP), Git, Jenkins, and Docker-based continuous integration and deployment (CI/CD) pipelines.
- Proven track record of collaboration with a variety of decision-makers to drive impact.
- Knowledge of Business Intelligence Tools such as Tableau.
- Experience in the fintech industry a plus.