
Data Engineer jobs at Genesys - 27664 jobs

  • Senior AI Full Stack Developer

    Genesys (4.5 company rating)

    Data engineer job at Genesys

    Genesys empowers organizations of all sizes to improve loyalty and business outcomes by creating the best experiences for their customers and employees. Through Genesys Cloud, the AI-powered Experience Orchestration platform, organizations can accelerate growth by delivering empathetic, personalized experiences at scale to drive customer loyalty, workforce engagement, efficiency and operational improvements. We employ more than 6,000 people across the globe who embrace empathy and cultivate collaboration to succeed. And, while we offer great benefits and perks like those of larger tech companies, our employees have the independence to make a larger impact on the company and take ownership of their work. Join the team and create the future of customer experience together.

    Join the Innovation team at Genesys, where we're redefining customer experience by building innovative, scalable solutions integrated with Genesys Cloud. We're looking for a Senior AI Full Stack Developer to help design and deliver high-performance applications using AWS infrastructure. This is a unique opportunity to shape next-generation systems that impact customers around the globe.
Key Responsibilities:
- Develop Agentic AI solutions using the most advanced capabilities available on Genesys Cloud
- Develop integrations to enhance Agentic AI solutions
- Develop robust, scalable backend services and APIs using Go
- Build dynamic, user-friendly front-end components with JavaScript and TypeScript
- Deploy and manage containerized applications using Kubernetes on AWS
- Integrate features and services with Genesys Cloud APIs to enhance CX workflows
- Optimize application performance, scalability, and reliability in cloud environments
- Collaborate with product managers, architects, and UX designers to define requirements
- Write clean, maintainable, and well-documented code following engineering best practices
- Troubleshoot and resolve complex production issues in a timely manner
- Participate in peer code reviews and mentor junior engineers on the team

Required Qualifications:
- Minimum of 5 years of professional software development experience, including 1+ year focused on AI/ML systems
- Proven expertise in agentic AI development, including hands-on experience with RAG, RAG pipelines, and the MCP framework
- Proficiency in Go for backend development
- Strong experience in system prompt engineering and tool-use configuration and tuning
- Strong experience with JavaScript and TypeScript for front-end and full-stack applications
- Hands-on expertise with Kubernetes for orchestration and container lifecycle management
- Proven experience building and deploying cloud-native solutions on AWS (e.g., EC2, ECS, Lambda, S3)
- Solid understanding of microservices architecture and RESTful API design
- Strong analytical and problem-solving skills with attention to detail
- Bachelor's degree in Computer Science, Engineering, or a related discipline, or equivalent practical experience

Preferred Qualifications:
- Experience with Genesys Cloud AI Guides
- Experience integrating with Genesys Cloud APIs or comparable customer experience platforms
- Exposure to UX design principles or experience developing intuitive user interfaces
- Familiarity with AWS-native services such as CloudFormation, Step Functions, or DynamoDB
- Demonstrated contributions to open-source projects or a strong GitHub portfolio

Nice-to-Have:
- Experience with DevOps tooling including Docker, CI/CD pipelines, and Terraform
- Knowledge of Agile/Scrum methodologies and team collaboration practices

Why Join Us:
- Drive innovation that redefines global customer experiences
- Thrive in a collaborative and inclusive environment with flexibility and autonomy
- Access comprehensive benefits and career development opportunities
- Work with state-of-the-art technologies including AWS and Genesys Cloud

Compensation: This role has a market-competitive salary with an anticipated base compensation range of $80,200.00 - $149,000.00. Actual salaries will vary depending on a candidate's experience, qualifications, skills, and location. This role may also be eligible for commission or performance-based bonus opportunities.

Benefits:
- Medical, Dental, and Vision Insurance; telehealth coverage
- Flexible work schedules and work-from-home opportunities
- Development and career growth opportunities
- Open Time Off in addition to 10 paid holidays
- 401(k) matching program
- Adoption Assistance
- Fertility treatments

Click here to view a summary overview of our Benefits. If a Genesys employee referred you, please use the link they sent you to apply.

About Genesys: Genesys empowers more than 8,000 organizations worldwide to create the best customer and employee experiences. With agentic AI at its core, Genesys Cloud™ is the AI-Powered Experience Orchestration platform that connects people, systems, data and AI across the enterprise. As a result, organizations can drive customer loyalty, growth and retention while increasing operational efficiency and teamwork across human and AI workforces.
To learn more, visit ****************

Reasonable Accommodations: If you require a reasonable accommodation to complete any part of the application process, or are limited in your ability to access or use this online application and need an alternative method for applying, you or someone you know may contact us at reasonable.accommodations@genesys.com. You can expect a response within 24-48 hours. To help us provide the best support, click the email link above to open a pre-filled message and complete the requested information before sending. If you have any questions, please include them in your email. This email is intended to support job seekers requesting accommodations; messages unrelated to accommodation, such as application follow-ups or resume submissions, may not receive a response.

Genesys is an equal opportunity employer committed to fairness in the workplace. We evaluate qualified applicants without regard to race, color, age, religion, sex, sexual orientation, gender identity or expression, marital status, domestic partner status, national origin, genetics, disability, military and veteran status, and other protected characteristics. Please note that recruiters will never ask for sensitive personal or financial information during the application phase.
    $80.2k-149k yearly 1d ago

  • Machine Learning Infrastructure and Data Engineer

    Apple Inc. (4.8 company rating)

    Sunnyvale, CA jobs

    Sunnyvale, California, United States | Machine Learning and AI

    The Video Computer Vision organization is working on exciting technologies for future Apple products. Our team delivers the computer vision and machine learning algorithms that power many Apple technologies, including human understanding and human intelligence algorithms with applications in digital humans, health, and AI. In this role, you will work closely with our team of experts in computer graphics, computer vision and machine learning to develop and extend the pipeline infrastructure used for solving large-scale data challenges, helping to deliver new algorithms for Apple products and bring high impact to millions of users.

    Description: Join us as an ML Data and Infrastructure Engineer and become the architect behind the data infrastructure that powers tomorrow's breakthrough AI/ML innovations. You'll be the critical link between ambitious algorithmic vision and real-world implementation, building the robust, scalable infrastructure that turns cutting-edge research into production-ready systems. We don't just use tools; we build them. You'll have complete ownership of the infrastructure that fuels our ML algorithms from conception to deployment, designing and orchestrating distributed compute systems that process massive datasets at scales few engineers ever encounter. Working shoulder-to-shoulder with AI/ML researchers and engineers in a small, agile team, you'll drive the creation of scalable infrastructure for ground truth data delivery, orchestrate massive distributed compute tasks across petabyte-scale datasets, develop novel validation frameworks, and help define strategic data collection approaches that push the boundaries of what's achievable. Your contributions won't just support algorithms: they'll directly shape product direction, unlock entirely new AI/ML capabilities, and define what's possible at Apple.
Minimum Qualifications:
- BS in computer science or a related discipline and 3+ years of relevant industry experience
- Experience developing or extending frameworks used for automating pipelines
- Strong software engineering skills with extensive experience in Python
- Strong foundational knowledge in computer science, with a deep understanding of algorithms and data structures

Preferred Qualifications:
- MS or PhD in computer science or a related discipline, or 5+ years of related industry experience
- Experience processing large, complex, unstructured data
- Experience developing core infrastructure and frameworks for automating data pipelines
- Excellent communication and experience working with multi-functional teams
- Passion for delivering high-quality software to end users
- Experience with geometry or computer vision algorithms
- Self-motivated, with an ability to drive projects from concept to production, balancing requirements with technical quality and development timelines

At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $147,400 and $272,100, and your base pay will depend on your skills, qualifications, experience, and location. Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and, for formal education related to advancing your career at Apple, reimbursement for certain educational expenses, including tuition.
Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits. Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program. Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant. Apple accepts applications to this posting on an ongoing basis.
    $147.4k-272.1k yearly 4d ago
  • Machine Learning Engineer - Backend/Data Engineer: Agentic Workflows

    Apple Inc. (4.8 company rating)

    Sunnyvale, CA jobs

    We design, build and maintain infrastructure to support agentic workflows for Siri. Our team is in charge of the data generation, introspection and evaluation frameworks that are key to efficiently developing foundation models and agentic workflows for Siri applications. In this team you will have the opportunity to work at the intersection of cutting-edge foundation models and products.

Minimum Qualifications:
- Strong background in computer science: algorithms, data structures and system design
- 3+ years of experience in large-scale distributed system design, operation and optimization
- Experience with SQL/NoSQL database technologies, data warehouse frameworks like BigQuery/Snowflake/RedShift/Iceberg, and data pipeline frameworks like GCP Dataflow/Apache Beam/Spark/Kafka
- Experience processing data for ML applications at scale
- Excellent interpersonal skills; able to work independently as well as cross-functionally

Preferred Qualifications:
- Experience fine-tuning and evaluating Large Language Models
- Experience with Vector Databases
- Experience deploying and serving LLMs

At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $147,400 and $272,100, and your base pay will depend on your skills, qualifications, experience, and location. Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan.
You'll also receive benefits including comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and, for formal education related to advancing your career at Apple, reimbursement for certain educational expenses, including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits. Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program. Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.
    $147.4k-272.1k yearly 19h ago
  • Senior Applications Consultant - Workday Data Consultant

    Capgemini (4.5 company rating)

    San Francisco, CA jobs

    Job Description - Senior Applications Consultant - Workday Data Consultant (054374)

Qualifications & Experience:
- Certified in Workday HCM
- Experience in Workday data conversion
- At least one implementation as a data consultant
- Ability to work with clients on data conversion requirements and load data into Workday tenants
- Flexible to work across the delivery landscape, including Agile applications development, support, and deployment
- Valid US work authorization (no visa sponsorship required)
- 6-8 years overall experience (minimum 2 years relevant); Bachelor's degree
- SE Level 1 certification; pursuing Level 2
- Experience in package configuration, business analysis, architecture knowledge, technical solution design, and vendor management

Responsibilities:
- Translate business cases into detailed technical designs
- Manage operational and technical issues, translating blueprints into requirements and specifications
- Lead integration testing and user acceptance testing
- Act as stream lead, guiding team members
- Participate as an active member within technology communities

Capgemini is an Equal Opportunity Employer encouraging diversity and providing accommodations for disabilities. All qualified applicants will receive consideration without regard to race, national origin, gender identity or expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status, or any other characteristic protected by law. Physical, mental, or environmental demands may be referenced. Reasonable accommodations will be considered where possible.
    $101k-134k yearly est. 1d ago
  • Senior Workday Data Consultant & Applications Lead

    Capgemini (4.5 company rating)

    San Francisco, CA jobs

    A leading consulting firm in San Francisco seeks a Senior Applications Consultant specializing in Workday Data Conversion. The ideal candidate will be certified in Workday HCM and have significant experience with data conversion processes. Responsibilities include translating business needs into technical designs, managing issues, and leading testing efforts. Candidates must possess a Bachelor's degree and a minimum of 6 years of experience, with at least 2 in a relevant role. This position requires valid US work authorization.
    $101k-134k yearly est. 1d ago
  • Marketing Data Scientist - Causal Inference & A/B Testing

    Amazon (4.7 company rating)

    San Francisco, CA jobs

    A leading audio entertainment service is seeking a Research Scientist to analyze marketing campaign effectiveness through causal models and experimental studies. This role involves collaborating with marketing teams and utilizing advanced statistical methods to enhance decisions and strategies. The ideal candidate holds a PhD or a Master's with significant quantitative research experience and proficiency in scripting languages like R or Python. Comprehensive benefits and competitive compensation are offered.
    $122k-176k yearly est. 19h ago
  • Delivery Consultant - Data Engineer, Data and Machine Learning, WWPS US Federal

    Amazon (4.7 company rating)

    Herndon, VA jobs

    The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.

    Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As a trusted advisor to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.

    The AWS Professional Services organization is a global team of experts that helps customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.

    You are expected to work from one of the above locations (or customer sites) Monday through Friday each week. This is not a remote position; you are expected to be in the office or with customers as needed.
This position requires that the candidate selected be a US Citizen and must currently possess and maintain an active TS/SCI security clearance with polygraph. The position further requires the candidate to opt into a commensurate clearance for each government agency for which they perform AWS work. 10040

Key job responsibilities: As an experienced technology professional, you will be responsible for:
- Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
- Providing technical guidance and troubleshooting support throughout project delivery
- Collaborating with stakeholders to gather requirements and propose effective migration strategies
- Acting as a trusted advisor to customers on industry trends and emerging technologies
- Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts

About the team: Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

Inclusive Team Culture: Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (diversity) conferences, inspire us to never stop embracing our uniqueness.

Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why flexible work hours and arrangements are part of our culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

Mentorship & Career Growth: We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Basic Qualifications:
- Bachelor's degree in Computer Science, Engineering, a related technical field, or equivalent
- 3+ years of cloud architecture and solution implementation experience
- 3+ years of experience with data warehouse architecture, ETL/ELT tools, data engineering, and large-scale data manipulation using technologies like Spark, EMR, Hive, Kafka, and Redshift
- Experience with relational databases, SQL, and performance tuning, as well as software engineering best practices for the development lifecycle, including coding standards, reviews, source control, and testing
- Current, active US Government Security Clearance of TS/SCI with Polygraph

Preferred Qualifications:
- AWS Professional-level certifications (e.g., Solutions Architect Professional, DevOps Engineer Professional)
- Experience with automation and scripting (e.g., Terraform, Python)
- Knowledge of security and compliance standards (e.g., HIPAA, GDPR)
- Experience leading large-scale data engineering and analytics projects using AWS technologies like Redshift, S3, Glue, EMR, Kinesis, Firehose, and Lambda
- Experience with non-relational databases and implementing data governance solutions

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $118,200/year in our lowest geographic market up to $204,300/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************* .

This position will remain posted until filled. Applicants should apply via our internal or external career site.
    $118.2k-204.3k yearly 7d ago
  • Delivery Consultant - Data Engineer, Data and Machine Learning, WWPS US Federal

    Amazon.com, Inc. (4.7 company rating)

    Arlington, VA jobs

    The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.

    Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As a trusted advisor to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.

    The AWS Professional Services organization is a global team of experts that helps customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.

    You are expected to work from one of the above locations (or customer sites) Monday through Friday each week. This is not a remote position; you are expected to be in the office or with customers as needed.
This position requires that the candidate selected be a US Citizen and must currently possess and maintain an active TS/SCI security clearance with polygraph. The position further requires the candidate to opt into a commensurate clearance for each government agency for which they perform AWS work. 10040

Key job responsibilities: As an experienced technology professional, you will be responsible for:
- Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
- Providing technical guidance and troubleshooting support throughout project delivery
- Collaborating with stakeholders to gather requirements and propose effective migration strategies
- Acting as a trusted advisor to customers on industry trends and emerging technologies
- Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts

About the team: Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

Inclusive Team Culture: Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (diversity) conferences, inspire us to never stop embracing our uniqueness.

Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why flexible work hours and arrangements are part of our culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

Mentorship & Career Growth: We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Basic Qualifications:
- Bachelor's degree in Computer Science, Engineering, a related technical field, or equivalent
- 3+ years of cloud architecture and solution implementation experience
- 3+ years of experience with data warehouse architecture, ETL/ELT tools, data engineering, and large-scale data manipulation using technologies like Spark, EMR, Hive, Kafka, and Redshift
- Experience with relational databases, SQL, and performance tuning, as well as software engineering best practices for the development lifecycle, including coding standards, reviews, source control, and testing
- Current, active US Government Security Clearance of TS/SCI with Polygraph

Preferred Qualifications:
- AWS Professional-level certifications (e.g., Solutions Architect Professional, DevOps Engineer Professional)
- Experience with automation and scripting (e.g., Terraform, Python)
- Knowledge of security and compliance standards (e.g., HIPAA, GDPR)
- Experience leading large-scale data engineering and analytics projects using AWS technologies like Redshift, S3, Glue, EMR, Kinesis, Firehose, and Lambda
- Experience with non-relational databases and implementing data governance solutions

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $118,200/year in our lowest geographic market up to $204,300/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ********************************************************

This position will remain posted until filled. Applicants should apply via our internal or external career site.
    $118.2k-204.3k yearly 7d ago
  • Delivery Consultant - Data Engineer, Data and Machine Learning, WWPS US Federal

    Amazon 4.7company rating

    Arlington, VA jobs

    The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle. Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project. The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries. It is expected to work from one of the above locations (or customer sites) Monday through Friday each week. This is not a remote position. You are expected to be in the office or with customers as needed. 
This position requires that the candidate selected be a US Citizen and must currently possess and maintain an active TS/SCI security clearance with polygraph. The position further requires the candidate to opt into a commensurate clearance for each government agency for which they perform AWS work. Key job responsibilities As an experienced technology professional, you will be responsible for: - Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs - Providing technical guidance and troubleshooting support throughout project delivery - Collaborating with stakeholders to gather requirements and propose effective migration strategies - Acting as a trusted advisor to customers on industry trends and emerging technologies - Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts About the team Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying. Inclusive Team Culture Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (diversity) conferences, inspire us to never stop embracing our uniqueness. Work/Life Balance We value work-life harmony. 
Achieving success at work should never come at the expense of sacrifices at home, which is why flexible work hours and arrangements are part of our culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud. Mentorship & Career Growth We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Basic Qualifications - Bachelor's degree in Computer Science, Engineering, a related technical field or equivalent - 3+ years of cloud architecture and solution implementation experience - 3+ years of experience with data warehouse architecture, ETL/ELT tools, data engineering, and large-scale data manipulation using technologies like Spark, EMR, Hive, Kafka, and Redshift. - Experience with relational databases, SQL, and performance tuning, as well as software engineering best practices for the development lifecycle, including coding standards, reviews, source control, and testing. - Current, active US Government Security Clearance of TS/SCI with Polygraph. Preferred Qualifications - AWS Professional level certifications (e.g., Solutions Architect Professional, DevOps Engineer Professional) preferred. - Experience with automation and scripting (e.g., Terraform, Python). - Knowledge of security and compliance standards (e.g., HIPAA, GDPR). - Experience leading large-scale data engineering and analytics projects using AWS technologies like Redshift, S3, Glue, EMR, Kinesis, Firehose, and Lambda. - Experience with non-relational databases and implementing data governance solutions. Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Our inclusive culture empowers Amazonians to deliver the best results for our customers. 
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner. Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $118,200/year in our lowest geographic market up to $204,300/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************* . This position will remain posted until filled. Applicants should apply via our internal or external career site.
    $118.2k-204.3k yearly 7d ago
  • Foundry Data Engineer: ETL Automation & Dashboards

    Data Freelance Hub 4.5company rating

    San Francisco, CA jobs

    A data consulting firm based in San Francisco is seeking a Palantir Foundry Consultant for a contract position. The ideal candidate should have strong experience in Palantir Foundry, SQL, and PySpark, with proven skills in data pipeline development and ETL automation. Responsibilities include building data pipelines, implementing interactive dashboards, and leveraging data analysis for actionable insights. This on-site role offers an excellent opportunity for those experienced in the field.
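The ETL automation this role centers on can be pictured as a plain extract-transform-load chain. The sketch below uses only standard-library Python rather than Foundry/PySpark, and every record and field name in it is an invented example:

```python
# Minimal ETL sketch: in Foundry this would be a PySpark transform;
# plain Python stands in here, with made-up field names.

def extract(rows):
    """Pretend source read: yield raw records."""
    yield from rows

def transform(records):
    """Normalize fields and drop rows missing a required key."""
    for r in records:
        if r.get("order_id") is None:
            continue  # basic data-quality filter
        yield {"order_id": r["order_id"], "amount_usd": round(float(r["amount"]), 2)}

def load(records):
    """Pretend sink write: collect into a list (a table in real life)."""
    return list(records)

raw = [{"order_id": 1, "amount": "19.991"}, {"order_id": None, "amount": "5"}]
table = load(transform(extract(raw)))
```

The same shape scales up: swap the generators for Foundry datasets or Spark DataFrames and the filter/rename logic carries over.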
    $114k-160k yearly est. 3d ago
  • GenAI Engineer-Data Scientist

    Capgemini 4.5company rating

    Seattle, WA jobs

    Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world. About the job you're considering We are seeking a passionate and innovative GenAI Engineer/Data Scientist to join our team. This role involves developing GenAI solutions and predictive AI models, deploying them in production environments, and driving the integration of AI technologies across our business operations. As a key member of our AI team, you will collaborate with diverse teams to design solutions that deliver tangible business value through AI-driven insights. Your Role Familiarity with API architecture and components such as external interfacing, traffic control, runtime execution of business logic, data access, authentication, and deployment. Key skills include an understanding of URLs and API Endpoints, HTTP Requests, Authentication Methods, Response Types, JSON/REST, Parameters and Data Filtering, Error Handling, Debugging, Rate Limits, Tokens, Integration, and Documentation. Develop generative and predictive AI models (including NLP, computer vision, etc.). Familiarity with cloud platforms (e.g., Azure, AWS, GCP) and big data tools (e.g., Databricks, PySpark) to develop AI solutions. Familiarity with intelligent autonomous agents for complex tasks and multimodal interactions. Familiarity with agentic workflows that utilize AI agents to automate tasks and improve operational efficiency. Deploy AI models into production environments, ensuring scalability, performance, and optimization. Monitor and troubleshoot deployed models and pipelines for optimal performance. 
Design and maintain data pipelines for efficient data collection, processing, and storage (e.g., data lakes, data warehouses). Required Qualifications: Minimum of 1 year of professional experience in AI, application development, machine learning, or a similar role. Experience in model deployment, MLOps, model monitoring, and managing data/model drift. Experience with predictive AI (e.g., regression, classification, clustering) and generative AI models (e.g., GPT, Claude LLM, Stable Diffusion). Bachelor's or greater degree in Machine Learning, AI, or equivalent professional experience. Capgemini offers a comprehensive, non-negotiable benefits package to all regular, full-time employees. In the U.S. and Canada, available benefits are determined by local policy and eligibility: paid time off based on employee grade (A-F), defined by policy (vacation: 12-25 days, depending on grade; company paid holidays; personal days; sick leave); medical, dental, and vision coverage (or provincial healthcare coordination in Canada); retirement savings plans (e.g., 401(k) in the U.S., RRSP in Canada); life and disability insurance; an employee assistance program; and other benefits as provided by local policy and eligibility.
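The API fundamentals this role lists (endpoints, parameters, JSON, error handling) can be illustrated without any network calls. The endpoint URL and response fields below are made up for the sketch:

```python
# Network-free sketch of API basics using only the standard library.
# The endpoint and JSON fields are hypothetical.
import json
from urllib.parse import urlencode

base = "https://api.example.com/v1/models"   # hypothetical endpoint
params = {"task": "summarize", "limit": 5}   # query parameters / data filtering
url = f"{base}?{urlencode(params)}"

# Simulated JSON response body, then defensive parsing with error handling.
body = '{"results": [{"id": "m-1", "score": 0.92}]}'
try:
    payload = json.loads(body)
    top = payload["results"][0]["id"]
except (json.JSONDecodeError, KeyError, IndexError):
    top = None  # in production: log, then retry/back off per the API's rate limits
```

A real client would send `url` with an auth token in the request headers and branch on the HTTP status code before parsing.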
    $96k-127k yearly est. 3d ago
  • Data Scientist Gen AI

    EXL 4.5company rating

    Atlanta, GA jobs

    Job Title: Data Scientist - GenAI Work Experience: 5+ Years On-site requirement: 4 days per week at Atlanta office We are looking for a highly capable and innovative Data Scientist with experience in Generative AI to join our Data Science Team. You will lead the development and deployment of GenAI solutions, including LLM-based applications, prompt engineering, fine-tuning, embeddings, and retrieval-augmented generation (RAG) for enterprise use cases. The ideal candidate has a strong foundation in machine learning and NLP, with hands-on experience in modern GenAI tools and frameworks such as OpenAI, LangChain, Hugging Face, Vertex AI, Bedrock, or similar. Key Responsibilities: Design and build Generative AI solutions using Large Language Models (LLMs) for business problems across domains like customer service, document automation, summarization, and knowledge retrieval. Fine-tune or adapt foundation models using domain-specific data. Implement RAG pipelines, embedding models, vector databases (e.g., FAISS, Pinecone, ChromaDB). Collaborate with data engineers, MLOps, and product teams to build end-to-end AI applications and APIs. Develop custom prompts and prompt chains using tools like LangChain, LlamaIndex, PromptFlow, or custom frameworks. Evaluate model performance, mitigate bias, and optimize accuracy, latency, and cost. Stay up to date with the latest trends in LLMs, transformers, and GenAI architecture. Required Skills: 5+ years of experience in Data Science / ML, with 1+ year hands-on in LLMs / GenAI projects. Strong Python programming skills, especially in libraries such as Transformers, LangChain, scikit-learn, PyTorch, or TensorFlow. Experience with OpenAI (GPT-4), Claude, Mistral, LLaMA, or similar models. Knowledge of vector search, embedding models (e.g., BERT, Sentence Transformers), and semantic search techniques. Ability to build scalable AI workflows and deploy them via APIs or web apps (e.g., FastAPI, Streamlit, Flask). 
Familiarity with cloud platforms (AWS/GCP/Azure) and MLOps best practices. Excellent communication skills with the ability to translate technical solutions into business impact. Preferred Qualifications: Experience with prompt tuning, few-shot learning, or LoRA-based fine-tuning. Knowledge of data privacy and security considerations in GenAI applications. Familiarity with enterprise architecture, SDLC, or building GenAI use cases in regulated domains (e.g., finance, insurance, healthcare). For more information on benefits and what we offer please visit us at **************************************************
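The retrieval step of the RAG pipelines this role describes reduces to a toy example: rank documents by cosine similarity of embedding vectors. A production system would use a real embedding model and a vector database such as FAISS or Pinecone; the vectors and document names here are hand-made assumptions:

```python
# Toy RAG retrieval: rank documents by cosine similarity of
# pre-computed (here: invented) embedding vectors.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

docs = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-faq":  [0.1, 0.8, 0.2],
}
query_vec = [0.85, 0.15, 0.0]  # would come from an embedding model

best = max(docs, key=lambda d: cosine(query_vec, docs[d]))
# `best` is the document that would be injected into the LLM prompt as context.
```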
    $67k-91k yearly est. 5d ago
  • Data Engineer/Architect

    Sigmoid 4.0company rating

    Dallas, TX jobs

    Sigmoid unlocks business value for Fortune 1000 companies through expert data engineering, data science, and AI consulting. We solve complex challenges for leaders in CPG, Retail, BFSI, and Life Sciences. With 10+ delivery centers in the USA, Canada, UK, Europe, Singapore, and India, we deliver cutting-edge data modernization and generative AI solutions. Join us to shape the future of data-driven innovation! Why Sigmoid? Inc. 500 Fastest-Growing Companies (4 years running) Deloitte Technology Fast 500 (4 years running) Top Employer: AIM's 50 Best Firms for Data Scientists Recently named British Data Awards Finalist & more accolades on the way! Accelerate your career with a fast-growing, innovative company. Apply now and be part of our award-winning team! Job Description: **5 days/week work from the Dallas office - no hybrid** Align Sigmoid with key client initiatives. Interface daily with customers across leading Fortune 500 companies to understand strategic requirements. Connect with CIO-, VP- and Director-level clients on a regular basis. Ability to understand business requirements and tie them to technology solutions. Build a delivery plan with domain experts and stay on track. Design, develop and evolve highly scalable and fault-tolerant distributed components using big data technologies. Excellent experience in application development and support, integration development and data management. Build the team and manage it on a day-to-day basis. Play the key role of hiring manager to build the future of Sigmoid. Guide developers in day-to-day design and coding tasks, stepping in to code if needed. 
Define your team structure; hire and train your team as needed. Stay up-to-date on the latest technology to ensure the greatest ROI for the customer & Sigmoid. Hands-on coder with a good understanding of enterprise-level code. Design and implement APIs, abstractions and integration patterns to solve challenging distributed computing problems. Experience in defining technical requirements, data extraction, data transformation, automating jobs, productionizing jobs, and exploring new big data technologies within a parallel processing environment. Culture: Must be a strategic thinker with the ability to think unconventionally / out-of-the-box. Analytical and data-driven orientation. Raw intellect, talent and energy are critical. Entrepreneurial and agile: understands the demands of a private, high-growth company. Ability to be both a leader and a hands-on "doer." Facilitate in Technical Aspects: Help technical architects understand clients' business constraints while drawing out architectures. Build a business case around highly scalable and fault-tolerant distributed components designed by technical architects. Lead the integration effort onsite with the client technical teams. Understand client acceptance criteria and emulate that for offshore technical teams. Receive and monitor technical deliverables from the offshore team and present to client teams. Qualifications: An 8-16+ year track record of relevant work experience and a Computer Science or related technical degree is required. Dynamic leader who has directly managed a team of highly competent developers in a fast-paced work environment. Experience in architecture and delivery of enterprise-scale applications. 
Lead and execute data migration projects, especially migrations to AWS. Architect and develop data warehouses using modern cloud technologies. Handle data project migrations from legacy systems (including Hadoop-based platforms) to cloud environments. Work extensively with Snowflake and Databricks for data processing and analytics. Preferred Qualifications: Architecting, developing, implementing and maintaining big data solutions. Experience with database modeling and development, data mining and warehousing. Experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Impala, Spark, Kerberos, Kafka, etc.). **5 days/week work from the Dallas office - no hybrid**
    $83k-116k yearly est. 1d ago
  • Sr Data Engineer-ETL

    Infovision Inc. 4.4company rating

    Denver, CO jobs

    Job Title: Sr. Data Engineer-ETL Duration: Long-term Main Skill: 10+ years of experience in the software development industry. We need data engineering experience: building ETLs using Spark and SQL, real-time and batch pipelines using Kafka/Firehose, building pipelines with Databricks/Snowflake, and ingesting multiple data formats such as JSON/Parquet/Delta. Job Description: About You You have a BS or MS in Computer Science or similar relevant field You work well in a collaborative, team-based environment You are an experienced engineer with 3+ years of experience You have a passion for big data structures You possess strong organizational and analytical skills related to working with structured and unstructured data operations You have experience implementing and maintaining high performance / high availability data structures You are most comfortable operating within cloud-based ecosystems You enjoy leading projects and mentoring other team members Specific Skills: Over 10 years of experience in the software development industry. Experience or knowledge of relational SQL and NoSQL databases High proficiency in Python, PySpark, SQL and/or Scala Experience in designing and implementing ETL processes Experience in managing data pipelines for analytics and operational use Strong understanding of in-memory processing and data formats (Avro, Parquet, JSON, etc.) Experience or knowledge of AWS cloud services: EC2, MSK, S3, RDS, SNS, SQS Experience or knowledge of stream-processing systems, e.g., Storm, Spark Structured Streaming, Kafka consumers Experience or knowledge of data pipeline and workflow management tools, e.g., Apache Airflow, AWS Data Pipeline Experience or knowledge of big data tools, e.g., Hadoop, Spark, Kafka. 
Experience or knowledge of software engineering tools/practices, e.g., GitHub, VS Code, CI/CD Experience or knowledge of data observability and monitoring Hands-on experience in designing and maintaining data schema lifecycles Bonus: Experience in tools like Databricks, Snowflake and ThoughtSpot
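The multi-format ingestion called out above boils down to normalizing records into one schema and routing bad input to a dead-letter store. Parquet/Delta would require pyarrow or Spark, so this standard-library sketch sticks to JSON lines, and the records in it are invented:

```python
# Format-tolerant ingestion sketch: parse JSON lines, normalize to one
# schema, and dead-letter anything unparseable.
import json

raw_lines = [
    '{"user": "a", "clicks": 3}',
    '{"user": "b"}',            # missing field -> apply a default
    'not json at all',          # bad record -> dead-letter queue
]

good, dead_letter = [], []
for line in raw_lines:
    try:
        rec = json.loads(line)
        good.append({"user": rec["user"], "clicks": int(rec.get("clicks", 0))})
    except (json.JSONDecodeError, KeyError):
        dead_letter.append(line)
```

The same normalize-or-dead-letter pattern is what a Kafka consumer or Spark streaming job would apply per record, just at much larger scale.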
    $71k-94k yearly est. 2d ago
  • Data Engineer

    Talent Software Services 3.6company rating

    Los Angeles, CA jobs

    Are you an experienced Data Engineer with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Data Engineer to work at their company in Melvindale, MI. Conducts data integration and analytics projects that automate data collection, transformation, storage, delivery, and reporting processes. Ensures optimization of data retrieval and processing, including performance tuning, delivery design for downstream analytics, machine learning modeling, feature engineering, and reporting. Works across multiple areas/teams to develop data integration methods that advance enterprise data and reporting capabilities. Ability to work independently and identify the appropriate course of action to analyze issues, recommend solutions and administer programs. Primary Responsibilities/Accountabilities: Develops data sets and automated pipelines (Microsoft ADF / ADX) that support data requirements for process improvement and operational efficiency metrics Builds reporting and visualizations that utilize data pipelines to provide actionable insights into compliance rates, operational efficiency, and other key business performance metrics Ad hoc and project analysis - gathering and reviewing data, making recommendations, and performing follow-through activities (scheduling meetings, modifying data, additional analysis, reporting changes) Supporting workload management activities via workforce planning, maintenance, recommendations & tracking through standard processing and systems (Power BI, Power Apps, Power Automate) Write/execute queries in databases Create visual representations of data sets in quick and understandable means to accurately reflect the dataset Gathering requirements to execute reports/projects/analysis Qualifications: Minimum: Bachelor's Degree (Computer Science/Information Systems/Software Engineering, etc.), Microsoft Azure ADF/ADX (ETL processes & KQL) Preferred: Databricks (PySpark, SQL), SQL Server querying
    $92k-129k yearly est. 3d ago
  • Data Scientist

    Talent Software Services 3.6company rating

    Novato, CA jobs

    Are you an experienced Data Scientist with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Data Scientist to work at their company in Novato, CA. Client's Data Science is responsible for designing, capturing, analyzing, and presenting data that can drive key decisions for Clinical Development, Medical Affairs, and other business areas of Client. With a quality-by-design culture, Data Science builds quality data that is fit-for-purpose to support statistically sound investigation of critical scientific questions. The Data Science team develops solid analytics that are visually relevant and impactful in supporting key data-driven decisions across Client. The Data Management Science (DMS) group contributes to Data Science by providing complete, correct, and consistent analyzable data at data, data structure and documentation levels following international standards and GCP. The DMS Center of Risk Based Quality Management (RBQM) sub-function is responsible for the implementation of a comprehensive, cross-functional strategy to proactively manage quality risks for clinical trials. Starting at protocol development, the team collaborates to define critical-to-quality factors, design fit-for-purpose quality strategies, and enable ongoing oversight through centralized monitoring and data-driven risk management. The RBQM Data Scientist supports central monitoring and risk-based quality management (RBQM) for clinical trials. This role focuses on implementing and running pre-defined KRIs, QTLs, and other risk metrics using clinical data, with strong emphasis on SAS programming to deliver robust and scalable analytics across multiple studies. 
Primary Responsibilities/Accountabilities: The RBQM Data Scientist may perform a range of the following responsibilities, depending upon the study's complexity and development stage: Implement and maintain pre-defined KRIs, QTLs, and triggers using robust SAS programs/macros across multiple clinical studies. Extract, transform, and integrate data from EDC systems (e.g., RAVE) and other clinical sources into analysis-ready SAS datasets. Run routine and ad-hoc RBQM/central monitoring outputs (tables, listings, data extracts, dashboard feeds) to support signal detection and study review. Perform QC and troubleshooting of SAS code; ensure outputs are accurate and efficient. Maintain clear technical documentation (specifications, validation records, change logs) for all RBQM programs and processes. Collaborate with Central Monitors, Central Statistical Monitors, Data Management, Biostatistics, and Study Operations to understand requirements and ensure correct implementation of RBQM metrics. Qualifications: PhD, MS, or BA/BS in statistics, biostatistics, computer science, data science, life science, or a related field. Relevant clinical development experience (programming, RBM/RBQM, Data Management), for example: PhD: 3+ years MS: 5+ years BA/BS: 8+ years Advanced SAS programming skills (hard requirement) in a clinical trials environment (Base SAS, Macro, SAS SQL; experience with large, complex clinical datasets). Hands-on experience working with clinical trial data. Proficiency with Microsoft Word, Excel, and PowerPoint. Technical - Preferred / Strong Plus: Experience with RAVE EDC. Awareness or working knowledge of CDISC, CDASH, SDTM standards. Exposure to R, Python, or JavaScript and/or clinical data visualization tools/platforms. Preferred: Knowledge of GCP, ICH, FDA guidance related to clinical trials and risk-based monitoring. Strong analytical and problem-solving skills; ability to interpret complex data and risk outputs. 
Effective communication and teamwork skills; comfortable collaborating with cross-functional, global teams. Ability to manage multiple programming tasks and deliver high-quality work in a fast-paced environment.
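A KRI of the kind this role implements is, at its core, a per-site metric compared against a pre-defined limit. The sketch below computes a hypothetical query-rate KRI for illustration; production versions would live in validated SAS macros, and all numbers, thresholds, and field names here are invented:

```python
# Illustrative KRI check: flag sites whose query rate per 100 data points
# exceeds a pre-defined (QTL-style) threshold. All values are invented.
site_data = {
    "site_01": {"queries": 12, "datapoints": 400},
    "site_02": {"queries": 30, "datapoints": 500},
}
THRESHOLD = 5.0  # queries per 100 datapoints, an assumed limit

flags = {
    site: round(100 * d["queries"] / d["datapoints"], 1)
    for site, d in site_data.items()
    if 100 * d["queries"] / d["datapoints"] > THRESHOLD
}
# Flagged sites would feed central monitoring review and signal detection.
```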
    $99k-138k yearly est. 4d ago
  • Data Engineer

    Zillion Technologies, Inc. 3.9company rating

    Saint Louis, MO jobs

    We're seeking an experienced Data Engineer to help design and build a cloud-native big data analytics platform on AWS. You'll work in an agile engineering team alongside data scientists and engineers to develop scalable data pipelines, analytics, and visualization capabilities. Key Highlights: Build and enhance data pipelines and analytics using Python, R, and AWS services (Glue, Lambda, Redshift, EMR, QuickSight, SageMaker) Design and support big data solutions leveraging Spark, Hadoop, and Redshift Apply DevOps and Infrastructure as Code practices (Terraform, Ansible, AWS CDK) Collaborate cross-functionally to align data architecture with business goals Support security, quality, and operational excellence initiatives Requirements: 7+ years of data engineering experience Strong AWS cloud and big data background Experience with containerization (EKS/ECR), APIs, and Linux Location: Hybrid in St. Louis, MO area (onsite 2-3 days)
    $71k-97k yearly est. 5d ago
  • Staff Machine Learning Data Engineer

    Backflip 3.7company rating

    San Francisco, CA jobs

    Mechanical design, the work done in CAD, is the rate-limiter for progress in the physical world. However, there are only 2-4 million people on Earth who know how to CAD. But what if hundreds of millions could? What if creating something in the real world were as easy as imagining the use case, or sketching it on paper? Backflip is building a foundation model for mechanical design: unifying the world's scattered engineering knowledge into an intelligent, end-to-end design environment. Our goal is to enable anyone to imagine a solution and hit “print.” Founded by a second-time CEO in the same space (first company: Markforged), Backflip combines deep industry insight with breakthrough AI research. Backed by a16z and NEA, we raised a $30M Series A and built a deeply technical, mission-driven team. We're building the AI foundation that tomorrow's space elevators, nanobots, and spaceships will be built in. If you're excited to define the next generation of hard tech, come build it with us. The Role We're looking for a Staff Machine Learning Data Engineer to lead and build the data pipelines powering Backflip's foundation model for manufacturing and CAD. You'll design the systems, tools, and strategies that turn the world's engineering knowledge - text, geometry, and design intent - into high-quality training data. This is a core leadership role within the AI team, driving the data architecture, augmentation, and evaluation that underpin our model's performance and evolution. You'll collaborate with Machine Learning Engineers to run data-driven experiments, analyze results, and deliver AI products that shape the future of the physical world. What You'll Do Architect and own Backflip's ML data pipeline, from ingestion to processing to evaluation. Define data strategy: establish best practices for data augmentation, filtering, and sampling at scale. Design scalable data systems for multimodal training (text, geometry, CAD, and more). 
Develop and automate data collection, curation, and validation workflows. Collaborate with MLEs to design and execute experiments that measure and improve model performance. Build tools and metrics for dataset analysis, monitoring, and quality assurance. Contribute to model development through insights grounded in data, shaping what, how, and when we train. Who You Are You've built and maintained ML data pipelines at scale, ideally for foundation or generative models, that shipped into production in the real world. You have deep experience with data engineering for ML, including distributed systems, data extraction, transformation, and loading, and large-scale data processing (e.g. PySpark, Beam, Ray, or similar). You're fluent in Python and experienced with ML frameworks and data formats (Parquet, TFRecord, HuggingFace datasets, etc.). You've developed data augmentation, sampling, or curation strategies that improved model performance. You think like both an engineer and an experimentalist: curious, analytical, and grounded in evidence. You collaborate well across AI development, infra, and product, and enjoy building the data systems that make great models possible. You care deeply about data quality, reproducibility, and scalability. You're excited to help shape the future of AI for physical design. Bonus points if: You are comfortable working with a variety of complex data formats, e.g. for 3D geometry kernels or rendering engines. You have an interest in math, geometry, topology, rendering, or computational geometry. You've worked in 3D printing, CAD, or computer graphics domains. Why Backflip This is a rare opportunity to own the data backbone of a frontier foundation model, and help define how AI learns to design the physical world. You'll join a world-class, mission-driven team operating at the intersection of research, engineering, and deep product sense, building systems that let people design the physical world as easily as they imagine it. 
Your work will directly shape the performance, capability, and impact of Backflip's foundation model, the core of how the world will build in the future. Let's build the tools the future will be made in.
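Two of the curation steps this role describes, deduplication and downsampling an over-represented class, can be sketched in a few lines. The field names, labels, and 50% keep-rate below are illustrative assumptions:

```python
# Curation sketch: dedupe training records by content hash, then
# downsample an over-represented class. All data is invented.
import hashlib
import random

records = [
    {"text": "bolt M6", "label": "fastener"},
    {"text": "bolt M6", "label": "fastener"},   # exact duplicate
    {"text": "gear 20T", "label": "gear"},
]

# Deduplicate by a hash of the record content.
seen, unique = set(), []
for r in records:
    h = hashlib.sha256(r["text"].encode()).hexdigest()
    if h not in seen:
        seen.add(h)
        unique.append(r)

# Downsample the "fastener" class at an assumed 50% keep-rate;
# seeding the RNG keeps the sampling reproducible.
rng = random.Random(0)
kept = [r for r in unique if r["label"] != "fastener" or rng.random() < 0.5]
```

At foundation-model scale the same two passes run distributed (e.g., over PySpark or Ray datasets), with near-duplicate detection replacing the exact hash.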
    $126k-178k yearly est. 4d ago
  • Healthcare Cloud Data Transformation Lead

    Sogeti 4.7company rating

    Minneapolis, MN jobs

About the job

We are seeking a highly skilled Cloud Data Transformation Architect/Leader to lead the design, implementation, and optimization of large-scale cloud-based data platforms and transformation initiatives. The architect will play a critical role in defining the data strategy and roadmap, modernizing legacy environments, and enabling advanced analytics and AI/ML capabilities. This position requires deep expertise in cloud ecosystems, data integration, governance, and performance optimization, along with strong leadership in guiding cross-functional teams. The transformation leader will drive the implementation roadmap and work with client leadership to demonstrate the ROI of the modernized platform and transformation roadmap.

What you will do at Sogeti

Advisory Consulting, Collaboration & Leadership
• Provide leadership and advisory consulting to client data and analytics leadership
• Partner with business stakeholders, data leaders, data engineers, and analysts to understand business needs and align them with data capabilities
• Provide CXO-level status reporting and communication, along with ROI and NPV measurement/reporting
• Provide technical leadership, mentorship, and best practices for data engineering teams
• Serve as the subject matter expert on cloud data and analytics transformation

Data Strategy, Roadmap & Architecture
• Define and maintain the enterprise cloud data architecture, strategy, and roadmap aligned with business objectives
• Define key metrics to measure progress and ROI from the roadmap
• Design end-to-end cloud data ecosystems (data lakes, data warehouses, lakehouses, streaming pipelines)
• Design metadata-driven architectures using open table formats (Delta Lake, Apache Hudi, and Apache Iceberg)
• Implement cost management measures on cloud data platforms
• Evaluate emerging technologies and recommend adoption strategies

Data Transformation & Migration
• Lead the modernization of on-premises data platforms to cloud-native architectures
• Architect scalable, secure, and high-performance ETL/ELT and real-time data pipelines
• Ensure seamless integration of structured, semi-structured, and unstructured data

Governance & Security
• Implement data governance, lineage, cataloging, and quality frameworks
• Ensure compliance with regulatory standards (GDPR, HIPAA, SOC 2, etc.)
• Define data security models for data access, encryption, and masking

BI Modernization
• Lead the modernization of legacy BI platforms to Power BI
• Architect and develop the semantic layer needed for the consumption layer (BI, AI, etc.)

Optimization & Innovation
• Drive performance tuning, cost optimization, and scalability of cloud data platforms
• Explore opportunities to leverage AI/ML, advanced analytics, and automation in data transformation
• Establish reusable frameworks and accelerators for faster delivery

Data Operations
• Define SLAs/KPIs for data platform operations together with the client
• Track and report SLAs/KPIs to executive leadership
• Identify and roll out solutions for improving SLA/KPI adherence

What you will bring

Experience:
• 18+ years in data engineering/architecture, with 5+ years leading enterprise-scale cloud data transformations
• Experience in the Healthcare Payer industry
• Experience defining enterprise-level data strategy and roadmap, and driving the implementation for at least 3 enterprise clients
• Experience playing a key advisory role for client data and analytics leadership (CDO and direct reports)
• Hands-on expertise with at least one major cloud provider: Azure is a must-have; AWS and GCP are good to have
• Experience implementing Snowflake on Azure, as well as medallion lakehouse architecture using Databricks and MS Fabric with open table formats
• Experience with various data modelling techniques and standards for cloud data warehouses
• Experience designing and implementing high-performing data pipelines; performance tuning expertise is required
• Experience with data governance implementation, with a focus on metadata management using Alation and data quality using industry-standard tools
• Experience with BI modernization from legacy BI platforms to Power BI (big plus)

Technical Skills:
• Data platforms: Snowflake (must have); Databricks, MS Fabric (big plus); Synapse, Redshift, BigQuery (good to have)
• Data integration: Azure Data Factory, dbt, Snowpipe, Informatica PowerCenter, IDMC CDI, Matillion, Kafka
• Programming: SQL, Python, SnowSQL, Snowpark, PySpark, Scala, or Java
• Data governance: Alation (must have); Informatica, Collibra, Ataccama (good to have)
• BI platforms: Power BI (must have); Qlik, SSRS, SAS (good to have)
• Infrastructure as Code: Terraform, ARM, CloudFormation
• Strong understanding of APIs, microservices, and event-driven architectures

Life at Sogeti: Sogeti supports all aspects of your well-being throughout the changing stages of your life and career.
For eligible employees, we offer:
• Flexible work options
• 401(k) with 150% match up to 6%
• Employee Share Ownership Plan
• Medical, Prescription, Dental & Vision Insurance
• Life Insurance
• 100% Company-Paid Mobile Phone Plan
• 3 Weeks PTO + 7 Paid Holidays
• Paid Parental Leave
• Adoption, Surrogacy & Cryopreservation Assistance
• Subsidized Back-up Child/Elder Care & Tutoring
• Career Planning & Coaching
• $5,250 Tuition Reimbursement & 20,000+ Online Courses
• Employee Resource Groups
• Counseling & Support for Physical, Financial, Emotional & Spiritual Well-being
• Disaster Relief Programs

About Sogeti

Part of the Capgemini Group, Sogeti makes business value through technology for organizations that need to implement innovation at speed and want a local partner with global scale. With a hands-on culture and close proximity to its clients, Sogeti implements solutions that will help organizations work faster, better, and smarter. By combining its agility and speed of implementation through a DevOps approach, Sogeti delivers innovative solutions in quality engineering, cloud and application development, all driven by AI, data and automation. Become Your Best | *************

Disclaimer

Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law. This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. 
Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship. Capgemini is committed to providing reasonable accommodation during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact.

Please be aware that Capgemini may capture your image (video or screenshot) during the interview process and that image may be used for verification, including during the hiring and onboarding process. Click the following link for more information on your rights as an Applicant **************************************************************************

Applicants for employment in the US must have valid work authorization that does not now and/or will not in the future require sponsorship of a visa for employment authorization in the US by Capgemini.
    $75k-112k yearly est. 4d ago
  • Sr. AI Application Developer

    Genesys (4.5 company rating)

    Data engineer job at Genesys

    Genesys empowers organizations of all sizes to improve loyalty and business outcomes by creating the best experiences for their customers and employees. Through Genesys Cloud, the AI-powered Experience Orchestration platform, organizations can accelerate growth by delivering empathetic, personalized experiences at scale to drive customer loyalty, workforce engagement, efficiency and operational improvements. We employ more than 6,000 people across the globe who embrace empathy and cultivate collaboration to succeed. And, while we offer great benefits and perks like larger tech companies, our employees have the independence to make a larger impact on the company and take ownership of their work. Join the team and create the future of customer experience together.

Job Summary

We are seeking a Senior AI Application Developer to design and deliver enterprise-grade AI applications that address complex business challenges across multiple domains. In this role, you will contribute to our AI Services platform by establishing best practices and reusable components, while focusing on secure, scalable, and compliant enterprise AI solutions. You will collaborate with data scientists, engineers, architects, and business stakeholders to translate cutting-edge AI capabilities into reliable, production-ready applications. At Genesys, we are transforming the customer experience landscape with empathy, AI innovation, and global impact. Joining our team means becoming part of a global organization that is redefining how companies engage with their customers.

About the Team

The Data, Analytics & AI team is a central group bringing together expertise in Data Engineering, Platforms, Analytics, Science, AI, and Governance. We serve the entire enterprise, supporting business-critical systems such as Salesforce, Workday, and others, while partnering across functions including sales, finance, marketing, customer success, and product. 
This team plays a vital role in enabling data-driven decision making and AI-powered transformation across the organization.

Key Responsibilities
* Lead the design, development, and deployment of AI-driven applications for enterprise use cases such as customer engagement, workflow automation, and decision support.
* Build applications that integrate seamlessly with the AI services platform to ensure reusability and alignment with enterprise standards.
* Translate business requirements into scalable AI-enabled solutions, balancing innovation with operational excellence.
* Contribute production-ready modules, APIs, and integration patterns to the shared AI services framework.
* Partner with platform and architecture teams to ensure scalability, observability, multi-tenancy, and compliance.
* Identify and resolve platform gaps surfaced through real-world application needs.
* Ensure applications meet enterprise security, compliance, and governance standards, including data privacy, auditability, and responsible AI practices.
* Implement robust logging, monitoring, and failover strategies to support high availability.
* Advocate for responsible AI practices, including explainability, bias mitigation, and safe model use.
* Collaborate with data scientists and ML engineers to operationalize models at scale.
* Work closely with enterprise architects to align with IT and business strategies.
* Mentor junior developers and provide technical leadership across application development projects.

Required Qualifications
* Bachelor's degree in Computer Science, Software Engineering, or related field (Master's preferred).
* 5+ years of enterprise software development experience, including at least 2+ years working with AI/ML applications.
* Strong proficiency in Python, Java, and/or JavaScript/TypeScript.
* Proven experience deploying AI/ML models into production systems at enterprise scale. 
* Familiarity with cloud platforms such as AWS, Azure, or GCP, and container orchestration tools like Kubernetes and Docker.
* Experience with distributed systems integration patterns including APIs, microservices, and messaging systems.

Preferred Qualifications
* Experience with LLM-based application frameworks such as LangChain, Semantic Kernel, or Haystack.
* Knowledge of MLOps practices including CI/CD pipelines, model monitoring, and feature stores.
* Hands-on experience with data governance, compliance frameworks, and enterprise security.
* Familiarity with vector databases, embeddings, and retrieval-augmented generation (RAG).
* Prior experience developing applications for multi-tenant, global-scale environments.

Compensation

This role has a market-competitive salary with an anticipated base compensation range listed below. Actual salaries will vary depending on a candidate's experience, qualifications, skills, and location. This role may also be eligible for commission or performance-based bonus opportunities.

$129,800.00 - $241,200.00

Benefits
* Medical, Dental, and Vision Insurance
* Telehealth coverage
* Flexible work schedules and work-from-home opportunities
* Development and career growth opportunities
* Open Time Off in addition to 10 paid holidays
* 401(k) matching program
* Adoption Assistance
* Fertility treatments

Click here to view a summary overview of our Benefits. If a Genesys employee referred you, please use the link they sent you to apply.

About Genesys

Genesys empowers more than 8,000 organizations worldwide to create the best customer and employee experiences. With agentic AI at its core, Genesys Cloud is the AI-Powered Experience Orchestration platform that connects people, systems, data and AI across the enterprise. As a result, organizations can drive customer loyalty, growth and retention while increasing operational efficiency and teamwork across human and AI workforces. 
To learn more, visit ****************

Reasonable Accommodations

If you require a reasonable accommodation to complete any part of the application process, or are limited in your ability to access or use this online application and need an alternative method for applying, you or someone you know may contact us at reasonable.accommodations@genesys.com. You can expect a response within 24-48 hours. To help us provide the best support, click the email link above to open a pre-filled message and complete the requested information before sending. If you have any questions, please include them in your email. This email is intended to support job seekers requesting accommodations. Messages unrelated to accommodation, such as application follow-ups or resume submissions, may not receive a response.

Genesys is an equal opportunity employer committed to fairness in the workplace. We evaluate qualified applicants without regard to race, color, age, religion, sex, sexual orientation, gender identity or expression, marital status, domestic partner status, national origin, genetics, disability, military and veteran status, and other protected characteristics. Please note that recruiters will never ask for sensitive personal or financial information during the application phase.
    $84k-107k yearly est. 11d ago
