
Associate Data Engineer

Negotiable Salary

Bayut | dubizzle

Dubai - United Arab Emirates


Description

Bayut & dubizzle have the unique distinction of being iconic, homegrown brands with a strong presence across the seven emirates of the UAE. Connecting millions of users across the country, we are committed to delivering the best online search experience. As part of Dubizzle Group, we stand alongside some of the strongest classifieds brands in the market. With a collective strength of 5 brands, we have more than 123 million monthly users who trust in our dedication to providing them with the best platform for their needs.

The Data Engineer will help deliver world-class big data solutions and drive impact for the dubizzle business. You will be responsible for exciting projects covering the end-to-end data life cycle: from raw data integrations with primary and third-party systems, through advanced data modeling, to state-of-the-art data visualization and the development of innovative data products. You will have the opportunity to build and work with both batch and real-time data processing pipelines. While working in a modern cloud-based data warehousing environment alongside a team of diverse, intense and interesting co-workers, you will liaise with other teams, such as product & tech, the core business verticals, trust & safety, finance and others, to enable them to be successful.

In this role, you will:
- Build raw data integrations with primary and third-party systems
- Carry out data warehouse modeling for operational and application data layers
- Develop in an Amazon Redshift cluster
- Perform SQL development as part of an agile team workflow
- Design and implement ETL in Matillion ETL
- Build real-time data pipelines and applications using serverless and managed AWS services such as Lambda, Kinesis, API Gateway, REST APIs, etc.
- Design and implement data products enabling data-driven features or business solutions
- Ensure data quality, system stability and security, and coding standards in SQL, Python and ETL design
- Build data dashboards and advanced visualizations in Periscope Data with a focus on UX, simplicity and usability
- Work with other departments on data products, i.e. product & technology, marketing & growth, finance, core business, advertising and others
- Contribute towards a strong team culture and the ambition to be on the cutting edge of big data
- Work autonomously without supervision on complex projects
- Participate in the regular ETL status check rota

Requirements

Qualification:
- Top-of-class technical degree such as computer science, engineering, math or physics

Experience:
- 3+ years of experience working with customer-centric data at big data scale, preferably in an online/e-commerce context
- 2+ years of experience with one or more programming languages, especially Python
- Strong track record in business intelligence solutions, building and scaling data warehouses, and data modeling
- Experience with modern big data ETL tools (e.g., Matillion) is a plus
- Experience with the AWS data ecosystem (or other cloud providers)
- Experience with modern data visualization platforms such as Sisense (formerly Periscope Data), Google Data Studio, Tableau, MS Power BI, etc.

Knowledge:
- Knowledge of modern real-time data pipelines (e.g., serverless framework, Lambda, Kinesis, etc.) is a strong plus
- Knowledge of relational and dimensional data models
- Knowledge of terminal operations and Linux workflows

Skills:
- World-class SQL skills across a variety of relational data warehousing technologies, especially cloud data warehouses (e.g., Amazon Redshift, Google BigQuery, Snowflake, Vertica, etc.)
- Ability to communicate insights and findings to a non-technical audience
- Written and verbal proficiency in English

Traits:
- Attention to detail with strong analytical and conceptual thinking
- Business acumen with an entrepreneurial spirit and the ability to think creatively
- Highly driven, with strong curiosity, self-motivation and an appetite for continuous learning
- Thrives in a fast-paced, innovative environment
- Lives the team values: Simpler. Better. Faster.

Benefits
- A fast-paced, high-performing team
- Multicultural environment with over 60 different nationalities
- Competitive tax-free salary
- Comprehensive health insurance
- Annual air ticket allowance
- Employee discounts at multiple vendors across the emirates
- Rewards & recognition
- Learning & development

#UAEdubizzle
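The real-time pipeline work described above (Lambda functions consuming Kinesis streams) can be sketched roughly as follows. This is a minimal illustration, not dubizzle's actual code: the event shape follows AWS's standard Kinesis-to-Lambda trigger format, while the `listing_id`/`user_id` payload fields are invented for the example.

```python
import base64
import json

def handler(event, context=None):
    """Decode Kinesis records from a Lambda trigger event.

    Records arrive base64-encoded under event["Records"][i]["kinesis"]["data"],
    per the AWS Kinesis trigger format.
    """
    processed = []
    for record in event.get("Records", []):
        payload = base64.b64decode(record["kinesis"]["data"])
        doc = json.loads(payload)
        # Keep only well-formed events; a real pipeline would route bad
        # records to a dead-letter queue instead of silently dropping them.
        if "listing_id" in doc and "user_id" in doc:
            processed.append({"listing_id": doc["listing_id"],
                              "user_id": doc["user_id"]})
    return processed

def _encode(doc):
    """Mimic how Kinesis base64-encodes each record payload."""
    return base64.b64encode(json.dumps(doc).encode()).decode()

# Simulate a trigger event with one valid and one malformed record.
event = {"Records": [
    {"kinesis": {"data": _encode({"listing_id": 42, "user_id": 7})}},
    {"kinesis": {"data": _encode({"foo": "bar"})}},
]}
print(handler(event))  # -> [{'listing_id': 42, 'user_id': 7}]
```

In production the decoded events would typically be batched and loaded onward (e.g., into Redshift), but the decode-validate-transform shape above is the core of the handler.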

Source: workable

Location
Dubai - United Arab Emirates


You may also like

Workable
Data Engineer
About DeepLight: DeepLight is a pioneering AI company committed to pushing the boundaries of innovation in artificial intelligence. Our mission is to harness the power of data and machine learning to revolutionize industries and create a brighter future. With a dynamic team of experts and a culture of relentless innovation, we are at the forefront of AI research and development.

Position Overview: DeepLight is seeking an exceptional Data Engineer to join our team of AI specialists in the UAE. As an Expert Data Engineer, you will be responsible for designing, implementing, and optimizing data pipelines and infrastructure to support our cutting-edge AI systems. You will collaborate closely with our multidisciplinary team to ensure the efficient collection, storage, processing, and analysis of large-scale data, enabling us to unlock valuable insights and drive innovation across various domains.

Requirements
- Pipeline Development: Design, develop, and maintain scalable and reliable data pipelines to ingest, transform, and load diverse datasets from various sources, including structured and unstructured data, streaming data, and real-time feeds, with consideration for downstream AI/ML workloads.
- Data Integration: Implement robust data integration processes to seamlessly integrate data from different sources, ensuring consistency, reliability, and data quality for analytics and AI use cases.
- Data Storage: Design and optimize data storage solutions, including relational databases, NoSQL databases, data lakes, and cloud storage services, to efficiently store and manage large volumes of data for AI and machine learning model consumption.
- Performance Optimization: Optimize data processing and query performance to enhance system scalability, reliability, and efficiency, leveraging techniques such as indexing, partitioning, caching, and parallel processing, especially for AI model training and inference pipelines.
- Data Governance: Implement data governance frameworks to ensure data security, privacy, integrity, and compliance with regulatory requirements, including data encryption, access controls, and auditing, all crucial for responsible AI deployment.
- Monitoring and Maintenance: Monitor data pipelines and infrastructure components, proactively identify and address issues, and perform routine maintenance tasks to ensure system stability and reliability across AI and data science environments.
- Collaboration: Collaborate closely with cross-functional teams, including data scientists, ML engineers, architects, and domain experts, to understand AI/ML requirements, gather insights, and deliver integrated, production-ready data solutions.
- Documentation: Create comprehensive documentation, including technical specifications, data flow diagrams, and operational procedures, to facilitate understanding, collaboration, and knowledge sharing across AI and analytics teams.
- Proven experience as a Data Engineer, with a track record of designing and implementing complex data pipelines and infrastructure solutions that support advanced analytics and AI initiatives.
- Expertise in data modeling, ETL (Extract, Transform, Load) processes, and data warehousing concepts, with proficiency in SQL and scripting languages (e.g., Python, Scala), especially in AI pipeline preparation and feature engineering.
- Strong hands-on experience with big data technologies and frameworks such as Hadoop, Spark, Kafka, and Flink, as well as cloud platforms (e.g., AWS, Azure, GCP) commonly used for AI workloads.
- Familiarity with containerization and orchestration technologies such as Docker and Kubernetes, and DevOps practices for CI/CD (Continuous Integration/Continuous Deployment), including ML model deployment and data pipeline automation.
- Experience supporting AI/ML initiatives, including the preparation of training datasets, model input/output data flows, MLOps integration, and experimentation tracking.
- Excellent analytical, problem-solving, and communication skills, with the ability to translate complex technical concepts into clear and actionable insights.
- Proven ability to work effectively in a fast-paced, collaborative environment, with a passion for innovation, continuous learning, and contributing to AI-driven solutions.

Benefits

Why Join DeepLight?
- Impact: Be part of a dynamic team that is shaping the future of AI and making a meaningful impact on industries and society.
- Innovation: Work on cutting-edge projects at the intersection of AI, data engineering, and machine learning, leveraging the latest technologies and methodologies.
- Collaboration: Collaborate with a diverse team of experts from various disciplines, fostering creativity, learning, and growth.
- Opportunity: Enjoy ample opportunities for professional development, career advancement, and leadership roles in a rapidly growing company.
- Culture: Join a culture of curiosity, excellence, and collaboration, where your ideas are valued and your contributions are recognized and rewarded.

If you are passionate about data engineering, AI, and innovation, and you thrive in a dynamic and collaborative environment, we want to hear from you! Apply now to join DeepLight and be part of our journey to unlock the potential of AI for a brighter future.
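The partitioning technique named under Performance Optimization can be shown with a toy example: group records by a partition key (here, event date) so that a query for one day only scans one bucket rather than the whole table. Engines such as Spark and Redshift apply the same idea at scale; the field names below are invented for illustration.

```python
from collections import defaultdict

def partition_by(records, key):
    """Group records into buckets keyed by the given field."""
    buckets = defaultdict(list)
    for rec in records:
        buckets[rec[key]].append(rec)
    return dict(buckets)

events = [
    {"date": "2024-01-01", "user": "a", "amount": 10},
    {"date": "2024-01-02", "user": "b", "amount": 20},
    {"date": "2024-01-01", "user": "c", "amount": 30},
]
buckets = partition_by(events, "date")
# A "query" for 2024-01-01 now touches 2 rows instead of scanning all 3.
print(len(buckets["2024-01-01"]))  # -> 2
```

The same principle underlies date-partitioned data lakes and Redshift sort/distribution keys: co-locating rows that are queried together keeps scans proportional to the data actually needed.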
Dubai - United Arab Emirates
Negotiable Salary
Workable
AI Data Engineer
We are hiring on behalf of our client, a newly launched family office, which is seeking an AI Data Engineer with strong AI integration expertise to help operationalize their financial models and data infrastructure. This role focuses on implementing and connecting existing tools, not building models or infrastructure from scratch. You will play a key role in automating workflows, managing data systems, and enabling real-time data access to improve decision-making processes.

Key Responsibilities:
- Upgrade and manage enterprise API accounts across LLM providers (e.g., OpenAI, DeepSeek), with a focus on optimizing usage and cost.
- Automate orchestration of financial models using existing frameworks and tools (e.g., LangChain, Zapier).
- Migrate model documentation and data storage from mobile devices to cloud databases with version control.
- Build workflows to automatically combine multiple financial models (e.g., volatility, fundamental analysis) based on asset types and use cases.
- Integrate real-time financial data via Retrieval-Augmented Generation (RAG) systems to ensure model accuracy and relevance.

Qualifications:
- Proven experience in data engineering and integrating AI/LLM tools into operational systems.
- Strong knowledge of API management, automation tools, and cloud infrastructure.
- Familiarity with LangChain, vector databases, and orchestration platforms is highly desirable.
- Previous experience working with financial data or supporting quant teams is a strong advantage.
- Excellent problem-solving skills and a proactive, self-starting attitude.
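The RAG integration mentioned above boils down to two steps: retrieve the snippets most relevant to a query, then assemble them into the prompt sent to an LLM. The sketch below is a deliberately minimal stand-in. A production system would use embeddings and a vector database (e.g., via LangChain) rather than keyword overlap, and the sample documents are invented.

```python
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9%]+", text.lower()))

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k.

    A real RAG system would rank by embedding similarity instead.
    """
    q = tokens(query)
    scored = sorted(documents,
                    key=lambda d: len(q & tokens(d)),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Stuff the retrieved context ahead of the question, RAG-style."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "AAPL realized volatility rose 12% this quarter.",
    "The office cafeteria reopens on Monday.",
    "Fundamental analysis suggests AAPL earnings beat estimates.",
]
prompt = build_prompt("What is the volatility outlook for AAPL?", docs)
print(prompt.splitlines()[1])  # the most relevant snippet ranks first
```

Swapping `retrieve` for a vector-store similarity search (and `build_prompt` for a framework's prompt template) turns this shape into the kind of real-time financial RAG pipeline the role describes.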
Dubai - United Arab Emirates
Negotiable Salary
© 2025 Servanan International Pte. Ltd.