Job descriptions & requirements
RocketDevs empowers software engineers by bridging talent with global tech opportunities. We connect skilled developers with innovative projects, fostering an inclusive tech community.
We are seeking a talented Data Engineer to join our team and contribute to client projects that drive data-driven decision-making and technological advancement. This is a remote position focused on building robust data pipelines and scalable data systems for our global clients.
Key Responsibilities
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows.
- Develop and manage data architectures, including data lakes, warehouses, and streaming systems.
- Work with both structured and unstructured data across SQL and NoSQL systems.
- Optimize data storage, retrieval, and processing for performance and scalability.
- Integrate data from various sources (APIs, third-party services, internal systems).
- Ensure data quality, integrity, and reliability across pipelines.
- Collaborate with data scientists, analysts, and engineers to support data use cases.
- Explore opportunities to leverage AI tools to improve data workflows or insights generation.
- Implement monitoring, logging, and alerting for data systems.
- Participate in architecture discussions, code reviews, and sprint planning.
Qualifications
- Strong proficiency in Python or another data-focused language (e.g., Scala, Java).
- Experience building and maintaining ETL/ELT pipelines.
- Solid understanding of SQL and experience with relational databases (e.g., PostgreSQL, MySQL).
- Experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
- Familiarity with data warehousing solutions (e.g., BigQuery, Redshift, Snowflake).
- Experience with distributed data processing tools (e.g., Apache Spark, Kafka, Flink).
- Knowledge of REST APIs and data integration techniques.
- Familiarity with cloud platforms (AWS, GCP, or Azure).
- Experience with version control (Git & GitHub), CI/CD pipelines, and containerization (Docker).
- Understanding of data modeling, schema design, and data governance principles.
- A deployed project or pipeline (personal or professional) demonstrating real-world data engineering work is required.
Nice to Have
- Experience with orchestration tools (e.g., Airflow, Prefect, Dagster).
- Familiarity with real-time data streaming architectures.
- Experience with data observability and monitoring tools.
- Exposure to AI/ML pipelines or feature engineering workflows.
- Familiarity with vector databases or embeddings for AI-powered applications.
- Knowledge of tools like dbt for data transformation.
- Experience optimizing data costs and query performance in cloud environments.
- Experience with data visualization tools (e.g., Tableau, Power BI).
Hiring Process
We ensure a fair and transparent process for every applicant:
Apply → Take a Compulsory 30-minute Assessment → 30-minute Onboarding Interview → Final interview & selection
Note:
- The assessment is mandatory. Please apply only if you’re willing to take it.
- Having a deployed project (personal or professional) that demonstrates your ability to build and manage real-world data systems is a must.