Data Platform Engineer, AI & Personalization (Remote, East Coast)
Build and optimize data pipelines powering AI personalization, working at the intersection of data engineering, system performance, and real-world client impact.
Position Overview
Eagle Eye is an AI-driven retail technology SaaS company powering personalized promotions and loyalty programs for leading global brands. In this role, you will sit at the heart of our platform, building, optimizing, and supporting the data systems that deliver high-performance, real-time personalization for enterprise clients.
As a Data Platform Engineer, you will play a key role in deploying, operating, and continuously improving our data-driven platform for our clients.
You will work at the intersection of data engineering, system performance optimization, and client-facing technical operations, ensuring that our AI personalization solution runs reliably in production and delivers measurable value.
You will collaborate closely with Product Managers, Data Science, and Customer Success teams, and regularly interact with client technical teams.
The “Personalized Challenges” team is currently Europe-based, and you will be its first North America-based member. You will primarily communicate remotely with your direct team members in Europe, but you will also collaborate with our extensive North American team, based in Washington, DC, Toronto, Jacksonville, and Chicago. Overall, Eagle Eye has a global presence spanning North America, EMEA, and APAC.
This is a United States-based remote role with a preference for Eastern Time Zone candidates, open to applicants authorized to work without sponsorship.
Responsibilities
This role is intentionally a hybrid of responsibilities:
Hands-on Data Engineering – 60%
Continuous Optimization of Data-Driven Systems – 30%
Client-facing Technical Support & Ticket Resolution – 10%
Success in this role is measured by platform reliability, data quality, system performance, and the long-term resolution of production issues, rather than by volume of support tickets.
Platform Deployment & Data Integration
Integrate client data pipelines into our data stack
Deploy and configure our platform for new clients
Ensure data quality, consistency, and reliability across incoming and outgoing data flows
Production Support & Ticket Management
Investigate and resolve technical tickets related to data pipelines, system performance, and algorithm behavior
Act as a technical escalation point for Customer Success teams
Diagnose root causes, propose fixes, and ensure long-term prevention of recurring issues
Continuous Optimization & Performance Improvement
Analyze system and algorithm performance using metrics, logs, and experimentation
Identify opportunities to optimize data pipelines, processing logic, and algorithm configurations
Collaborate with Product and Data Science teams to prioritize and roll out improvements
Design and analyze A/B tests to measure the impact of changes
You Are
A disciplined problem-solver who enjoys digging into the "why" of system behavior to find long-term solutions.
An autonomous worker, ready to be the first North American member of the team while maintaining effective collaboration with European colleagues.
A clear, structured communicator capable of explaining complex technical issues to both engineers and non-technical stakeholders.
Rigorous and detail-oriented, especially when monitoring production systems and ensuring data integrity.
Comfortable navigating production incidents and support tickets with a calm, engineering-driven approach.
Curious and pragmatic, motivated by understanding real-world client use cases and optimizing system performance.
Comfortable working fully remotely: even if located near other team members, you will be remote from your direct colleagues based in France. You have proven experience thriving in a fully remote setup and collaborating across cultures.
Able to participate in a 1–2 week onboarding in Paris, offering dedicated time for in-person collaboration, learning, and team connection.
You Have
3–5 years of experience as a Data Engineer or Data Solutions Engineer in a production-heavy environment.
A bachelor's degree in Computer Science, Data Engineering, or a related field.
Deep hands-on experience with Python and/or Scala.
Proven expertise using Spark for large-scale data processing.
Practical experience building and managing data stacks within Google Cloud Platform (GCP) and BigQuery.
A solid foundation in data engineering principles, including data pipeline design and system optimization.
A working knowledge of Data Science and Machine Learning concepts to help bridge the gap between data flows and algorithm performance.
Department: Technology
Client Name: Eagle Eye