Data Engineer at Solv Kenya

  • Full Time
  • Kenya

Solv Kenya

About the job

We at Solv rely on insightful data to guide our decisions and to get the most out of our data and models. We are looking for a Data Engineer to help us uncover the information hidden in vast amounts of data, so that we can make smarter decisions and deliver even better products. The ideal candidate combines strong mathematical and statistical skills with rare curiosity and originality. Although the position will require you to wear many hats, you will primarily work closely with the business to identify issues and propose data-driven solutions for effective decision making, extracting insights and knowledge from data using analytical, statistical, and machine learning techniques. Beyond technical prowess, you will need the soft skills to communicate highly complex data trends clearly to business leaders. Overall, you will aim for efficiency by aligning data systems and processes with our organisational objectives. Your responsibilities will include:

  • Develop and maintain conceptual, logical, and physical data models using industry-standard modeling techniques and tools.
  • Design and optimize data storage solutions, including data warehouses, ensuring data integrity, sufficiency, and accuracy.
  • Develop and maintain data pipelines from different data sources for efficient data extraction, transformation, and loading (ETL) processes (a minimal sketch follows this list).
  • Perform data ingestion from diverse sources into the AWS environment.
  • Construct data models for gathering information from various sources and storing it effectively.
  • Collaborate with data analysts, data scientists, and software engineers to understand data requirements and deliver relevant data sets (e.g., for business intelligence or credit risk modelling).
  • Develop world-class capabilities in analysis and data mining to provide answers to business performance queries.
  • Document data pipelines, processes, and best practices for knowledge sharing.
  • Develop innovative and analytical approaches to provide more meaningful management of information.
  • Manage individual projects regarding the optimal extraction, transformation, and loading of data from a wide variety of sources into data pipelines.
  • Manage the identification, design, and implementation of internal process improvements: automating manual processes, optimizing data delivery, redesigning architecture for greater scalability.
  • Manage the implementation of changes to data systems to ensure compliance with data governance, protection, privacy, and security requirements.
  • Develop a scalable data infrastructure, applying distributed-systems concepts from a data storage and compute standpoint.
  • Develop business reports to closely monitor business growth and revenue.
  • Keep abreast of changes to technology, systems, procedures, operational processes, and business requirements, and align systems to address the impact of these changes.
  • Develop and automate performance-tracking methods for assessment of all customer profitability and performance indicators for circulation to business heads and executive management.
  • Continuously develop and improve reporting and BI tools.
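
As an illustration of the ETL work these responsibilities describe, here is a minimal PySpark sketch of an extract-transform-load job. It is only a sketch under assumed names: the S3 paths, the transactions dataset, and its columns (amount, created_at) are hypothetical placeholders, not Solv's actual stack.

```python
# Minimal PySpark ETL sketch: extract raw CSV data, apply a simple
# transformation, and load the result as Parquet for downstream analytics.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw source data (path is a placeholder).
raw = spark.read.csv(
    "s3://example-bucket/raw/transactions/", header=True, inferSchema=True
)

# Transform: clean types and derive a simple daily aggregate.
daily = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("day", F.to_date("created_at"))
       .groupBy("day")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("txn_count"))
)

# Load: write the curated table (path is a placeholder).
daily.write.mode("overwrite").parquet(
    "s3://example-bucket/curated/daily_transactions/"
)
```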

Requirements

  • University degree in Computer Science, statistics, applied mathematics, data management, information systems, information science, or a related scientific or quantitative field.
  • Minimum of two years’ experience designing and implementing large-scale data pipelines for data curation, feature engineering, and machine learning, using Spark with PySpark, Java, Scala, or Python, either on premises or in the cloud (AWS or Azure).
  • Experience with programming languages, particularly SQL, Python, or R, in addition to expertise in data modelling.
  • Data Visualization skills: Power BI (or other visualization tools), DAX programming, APIs, data modelling, SQL, storytelling, and wireframe design.
  • Data Engineering skills: Python, SQL, Spark, cloud architecture, data and solution architecture, APIs, Databricks, Azure.
  • Experience supporting cross-functional teams and collaborating with stakeholders in support of data analytics initiatives is desirable.
  • At least one year of experience designing and building streaming data ingestion, analysis, and processing pipelines using cloud-native technologies such as Kafka and Spark Streaming (a minimal sketch follows this list).
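
Likewise, a minimal sketch of the streaming ingestion experience asked for above, using Spark Structured Streaming to read from Kafka. The broker address, topic name, and checkpoint path are hypothetical placeholders.

```python
# Minimal Spark Structured Streaming sketch: ingest events from a Kafka
# topic and maintain a running per-minute event count.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Read a stream of raw events from Kafka (requires the spark-sql-kafka
# connector package; broker and topic are placeholders).
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "transactions")
         .load()
)

# Kafka delivers raw bytes; decode the value and count events per minute.
counts = (
    events.select(F.col("value").cast("string").alias("payload"),
                  F.col("timestamp"))
          .groupBy(F.window("timestamp", "1 minute"))
          .count()
)

# Write the running counts; "update" mode emits only changed windows.
query = (
    counts.writeStream.outputMode("update")
          .format("console")
          .option("checkpointLocation", "/tmp/checkpoints/txn-counts")
          .start()
)
query.awaitTermination()
```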

Interested applicants can apply via this link.
