
Data Scientist jobs at P&G

- 431 jobs
  • Data Scientist

    Procter & Gamble (4.8 company rating)

    Data scientist job at P&G

    Do you enjoy solving billion-dollar data science problems across trillions of data points? Are you passionate about working at the cutting edge of interdisciplinary boundaries, where computer science meets hard science? If you like turning untidy data into nonobvious insights and surprising business leaders with the transformative power of Artificial Intelligence (AI), we want you on our team at P&G. As a Data Scientist in our organization, you will play a crucial role in disrupting current business practices by designing and implementing innovative models that enhance our processes. You will research, design, and customize algorithms tailored to various problems and data types. Drawing on your expertise in Operations Research (including optimization and simulation) and machine learning models (such as tree models, deep learning, and reinforcement learning), you will contribute directly to the development of scalable Data Science algorithms and collaborate with Data and Software Engineering teams to productionize these solutions. You will apply exploratory data analysis, feature engineering, and model building to massive datasets, delivering accurate and impactful insights. Additionally, you will mentor others as a technical coach and become a recognized expert in one or more Data Science techniques, quantifying the improvements in business outcomes resulting from your work.

    Key Responsibilities:
    + Algorithm Design & Development: Contribute directly to the design and development of scalable Data Science algorithms.
    + Collaboration: Work closely with Data and Software Engineering teams to productionize algorithms effectively.
    + Data Analysis: Apply thorough technical knowledge to large datasets, conducting exploratory data analysis, feature engineering, and model building.
    + Coaching & Mentorship: Develop others as a technical coach, sharing your expertise and insights.
    + Expertise Development: Become a known expert in one or more Data Science techniques and methodologies.

    Job Qualifications

    Required Qualifications:
    + Education: Pursuing or holding a Master's degree in a quantitative field (Operations Research, Computer Science, Engineering, Applied Mathematics, Statistics, Physics, Analytics, etc.), or equivalent work experience.
    + Technical Skills: Proficiency in programming languages such as Python, and familiarity with data science/machine learning libraries like OpenCV, scikit-learn, PyTorch, TensorFlow/Keras, and Pandas.
    + Communication: Strong written and verbal communication skills, with the ability to influence others to take action.

    Preferred Qualifications:
    + Analytic Methodologies: Experience applying analytic methodologies such as Machine Learning, Optimization, and Simulation to real-world problems.
    + Continuous Learning: A commitment to lifelong learning, keeping up to date with the latest technology trends, and a willingness to teach others while learning new techniques.
    + Data Handling: Experience with large datasets and cloud computing platforms such as GCP or Azure.
    + DevOps Familiarity: Familiarity with DevOps environments, including tools like Git and CI/CD practices.

    Compensation for roles at P&G varies depending on a wide array of non-discriminatory factors including but not limited to the specific office location, role, degree/credentials, relevant skill set, and level of relevant experience. At P&G, compensation decisions are dependent on the facts and circumstances of each case. Total rewards at P&G include salary + bonus (if applicable) + benefits. Your recruiter may be able to share more about our total rewards offerings and the specific salary range for the relevant location(s) during the hiring process.

    We are committed to providing equal opportunities in employment. We value diversity and do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Immigration Sponsorship is not available for this role. For more information regarding who is eligible for hire at P&G, along with other work authorization FAQs, please click HERE. Procter & Gamble participates in E-Verify as required by law. Qualified individuals will not be disadvantaged based on being unemployed. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

    Job Schedule: Full time
    Job Number: R000135859
    Job Segmentation: Entry Level
    Starting Pay / Salary Range: $85,000.00 - $115,000.00 / year
    $85k-115k yearly 60d+ ago
  • Cincinnati State Co-Op R&D

    P&G (4.8 company rating)

    Data scientist job at P&G

    Associate Researcher Co-Op Program - Cincinnati State Students ONLY

    P&G is the largest consumer packaged goods company in the world. We have operations in over 75 countries, with 65 trusted brands that improve lives for 5 billion consumers worldwide. This brings many advantages, including the opportunity for our employees to enjoy a diverse and rewarding lifelong career filled with new and exciting challenges. We believe phenomenal ideas emerge from the creative connections that happen between our talented employees, and we encourage diverse, multi-functional teams to work together to generate new insights to address challenges we face.

    The Opportunity
    Do you thrive in a dynamic environment? Are you ready to put the knowledge and skills that you learned in school to use? We're looking for phenomenal teammates who have these qualities and want to make a difference for consumers. Our paid co-op positions are pre-entry level and offer an opportunity for you to learn the office and lab environment while balancing projects with management support needs. These roles are non-management positions with exposure to tasks related to larger projects. As a co-op, you will experience what a non-management career at P&G offers. You will report to a supervisor in the area of work for training and mentorship. The co-op program offers you a range of hands-on training on the practicalities of lab-based work as well as culture and work norms. This is a paid position and, depending on satisfactory completion of certain criteria, you may be considered for a full-time position upon graduation. We are looking for individuals who are passionate about hands-on experimentation and basic science. The co-op onboards to P&G systems and performs the basic and critical experimental work of day-to-day applied research. Work is predominantly execution/procedural oriented: in a lab, in a plant or pilot plant (internal or external), at a clinical site, with consumers, and/or on a computer.

    The Ideal Candidate:
    + Must be enrolled in a local Associate's Degree program in a Science field (we prefer Engineering, Biology, or Chemistry, although other similar majors will be considered). All class standings/credit hours are eligible.
    + Has a GPA in good academic standing.
    + Is committed to working at least one session, in line with your semester; you would need to still be enrolled in classes. Timing of the assignments will be based on business need but would likely be around 16 weeks.
    + Can work 40 hours a week.
    + Is able to commute to work in the Greater Cincinnati area or willing to relocate at your own expense.
    + Minimum work duration of 12 weeks, but no more than 24 weeks in a 12-month period.

    Job Qualifications
    Education: Working towards an Associate's degree.

    If you're a really good fit, you'll have:
    + The capacity to set priorities and work independently
    + Strong attention to detail
    + Experience with word processing, spreadsheet, and presentation applications
    + Clear written and verbal skills to document experiments in a lab notebook and discuss observations
    + Strong communication skills
    + The ability to learn on the job in a dynamic environment
    + Experience in a biology, chemistry, or social science lab

    Just So You Know: At P&G, Intern/Co-Op sessions are considered temporary employment, with a predicted ending point. No full-time employment commitments are made; however, depending on satisfactory completion of certain criteria, you may be considered for a full-time position upon graduation. Relocation is not offered for this position. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

    Compensation for roles at P&G varies depending on a wide array of non-discriminatory factors including but not limited to the specific office location, role, degree/credentials, relevant skill set, and level of relevant experience. At P&G, compensation decisions are dependent on the facts and circumstances of each case. Total rewards at P&G include salary + bonus (if applicable) + benefits. Your recruiter may be able to share more about our total rewards offerings and the specific salary range for the relevant location(s) during the hiring process. We are committed to providing equal opportunities in employment. We value diversity and do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Immigration Sponsorship is not available for this role. For more information regarding who is eligible for hire at P&G, along with other work authorization FAQs, please click HERE. Procter & Gamble participates in E-Verify as required by law. Qualified individuals will not be disadvantaged based on being unemployed.

    Job Schedule: Full time
    Job Number: R000136366
    Job Segmentation: Internships
    Starting Pay / Salary Range: $20.50 - $23.50 / hour
    $20.5-23.5 hourly 60d+ ago
  • Data Scientist

    Surya Systems, Inc. (3.6 company rating)

    San Francisco, CA jobs

    Title: Data Scientist
    Duration: Long-term contract
    Location: Remote

    • A minimum of 5+ years of industry experience in business analytics, with an emphasis on Trust & Safety product and operations scalability, on developing business-facing metrics, and on informing future business decisions. Experience on a trust and safety team and/or working closely with policy, content moderation, or security teams.
    • Expert skills in SQL and expertise in at least one programming language for data analysis (Python or R).
    • Experience with non-experimental causal inference methods, experimentation, and machine learning techniques, ideally in a multi-sided platform setting.
    • Experience in schema design and high-dimensional data modeling (ETL frameworks like Airflow).
    • Demonstrated ability to create and drive product roadmaps for the business, then seamlessly execute against them.
    • Proven ability to tailor recommended solutions to business problems for a cross-functional team, with the ability to communicate clearly and effectively with cross-functional partners of varying technical levels.
    • Ability to work under conditions of ambiguity in a fast-growth, sometimes uncertain and complex environment; comfortable acting with minimal planning, direction, and supervision. You can identify issues both within and outside of your immediate scope and propose solutions.
    • Absolute ability to maintain confidentiality and objectivity.

    Thanks & Regards,
    G Naveen Kumar
    Email: *********************
    Surya Systems, Inc.
    $114k-153k yearly est. 3d ago
  • Lead Data Scientist

    TPI Global Solutions (4.6 company rating)

    Alhambra, CA jobs

    Role: Principal Data Scientist
    Duration: 12+ months contract

    The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms, including Databricks or similar, to meet increasing demand for predictive analytics and AI solutions. The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship.

    The Principal Data Scientist will possess:
    • Exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights.
    • Strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles.
    • Working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams.
    • Outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel.
    • A highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight.
    • A meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables.
    • Proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts.

    Required Experience
    • Five (5)+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments.
    • Three (3)+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration.
    • Three (3)+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing).
    • Two (2)+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring.
    • Two (2)+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms).
    • Two (2)+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development.
    • Two (2)+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases.
    • One (1)+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices.

    Education
    This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis. At least one of the following industry-recognized certifications in data science or cloud analytics is also required and may not be substituted with additional experience:
    • Microsoft Azure Data Scientist Associate (DP-100)
    • Databricks Certified Data Scientist or Machine Learning Professional
    • AWS Machine Learning Specialty
    • Google Professional Data Engineer
    • or equivalent advanced analytics certifications.

    Additional Information
    • California resident candidates only. This position is HYBRID (2 days onsite, 2 days telework). Interviews will be conducted via Microsoft Teams. The work schedule follows a 4/40 (10-hour days, Monday-Thursday), with the specific shift determined by the program manager. Shifts may range between 7:15 a.m. and 6:00 p.m.
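The "model versioning" idea named in the MLOps bullet above can be illustrated with a toy sketch in plain Python. This is not MLflow's API or any employer's system; the `ToyModelRegistry` class and the `churn_model` example are hypothetical, and real registries (MLflow, Azure ML) add stage transitions, artifact storage, and lineage tracking on top of this basic pattern:

```python
import hashlib
import json
import time

class ToyModelRegistry:
    """Minimal in-memory registry illustrating model versioning:
    each registration of a model name gets an incrementing version
    number and a content hash of its parameters, so any version can
    be retrieved or compared later."""

    def __init__(self):
        self._store = {}  # model name -> list of version records

    def register(self, name, params):
        versions = self._store.setdefault(name, [])
        record = {
            "version": len(versions) + 1,
            "params": params,
            # Hash the params so identical configs are detectable.
            "hash": hashlib.sha256(
                json.dumps(params, sort_keys=True).encode()
            ).hexdigest()[:12],
            "registered_at": time.time(),
        }
        versions.append(record)
        return record["version"]

    def latest(self, name):
        return self._store[name][-1]

registry = ToyModelRegistry()
registry.register("churn_model", {"max_depth": 4})
v2 = registry.register("churn_model", {"max_depth": 6})  # v2 == 2
```

Registering the same name twice yields version 2, and `latest` returns the most recent record; production systems persist these records rather than keeping them in memory.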
    $90k-125k yearly est. 4d ago
  • Data Analyst II

    Allagash Brewing Company (4.3 company rating)

    Portland, ME jobs

    Do you feel energized when you build tools that make work smoother, decisions clearer, and the story behind the numbers come to life? Are you someone who finds satisfaction in thoughtful analysis, well-designed dashboards, and models that support meaningful progress? If this resonates with you, we invite you to consider joining our Finance team as a Data Analyst II - a role where your technical strengths and business insight will help shape how we plan, measure, and grow.

    Allagash Brewing Company in Portland, Maine is hiring a mid-level Data Analyst (2+ years of relevant experience and a bachelor's degree in a related field required) to help strengthen and expand our analytics ecosystem. In this role, you'll write SQL queries and develop data models to support robust reporting, create and refine Power BI dashboards for financial, sales, and operations insights, and conduct statistical analysis, forecasting, and predictive modeling using Python or R where appropriate. Your work will ensure that every team has reliable, accurate, and actionable insights when they need them.

    We're looking for someone with strong SQL skills, solid business acumen, experience building dashboards, and a comfort level explaining analysis to both technical and non-technical audiences. Familiarity with Python or R for statistical analysis and automating workflows is highly valued. Experience in brewing, food/beverage, manufacturing, or CPG is a plus. This is a full-time role, Monday through Friday, during standard business hours. The position is based in our Portland office and will be 100% on-site during the initial onboarding period. After six months, and with strong performance, you may be eligible to work a hybrid schedule, with an expected on-site presence of at least 80%.
We are proud to offer strong wages and a thoughtful benefits package, including 100% paid premiums for employee health, dental, life, and disability benefits; generous paid time off from day one; paid volunteer time; continuing education reimbursement; an onsite fitness center; and a 401(k) with employer match up to 4%. Employees have access to free bus passes, on-site parking, covered bike racks, locker rooms, and showers. We value a diverse workforce and encourage applications from people of all backgrounds, including those from historically underrepresented communities in craft beer. Allagash is an equal opportunity employer, and this position is open to all qualified candidates.
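As a minimal sketch of the "forecasting" work the role describes, here is a least-squares linear trend forecast in plain Python. The monthly figures are hypothetical, and real forecasting work (as the posting notes) would typically use Python or R libraries with seasonality handling rather than a bare trend line:

```python
def linear_forecast(series, horizon=1):
    """Fit y = a + b*t by ordinary least squares over t = 0..n-1,
    then project the next `horizon` periods."""
    n = len(series)
    t = list(range(n))
    t_mean = sum(t) / n
    y_mean = sum(series) / n
    # Slope: covariance of (t, y) over variance of t.
    b = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, series)) \
        / sum((ti - t_mean) ** 2 for ti in t)
    a = y_mean - b * t_mean
    return [a + b * (n + h) for h in range(horizon)]

# Hypothetical monthly case volumes
monthly_cases = [120, 135, 150, 160, 178, 190]
next_month = linear_forecast(monthly_cases, horizon=1)[0]  # ~204.4
```

The closed-form slope and intercept here are exactly what `numpy.polyfit(t, y, 1)` would return; the pure-Python version just keeps the sketch dependency-free.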
    $53k-75k yearly est. 2d ago
  • Senior Data Engineer

    Leon Recruitment (4.2 company rating)

    Miami, FL jobs

    Sr. Data Engineer
    CLIENT: Fortune 150 Company; Financial Services

    SUMMARY DESCRIPTION:
    The Data Engineer will serve in a strategic role designing and managing the infrastructure that supports data storage, transformation, processing, and retrieval, enabling efficient data analysis and decision-making within the organization. This position is critical as part of the Database and Analytics team responsible for design, development, and implementation of complex enterprise-level data integration and consumption solutions. It requires a highly technical, self-motivated senior engineer who will work with analysts, architects, and systems engineers to develop solutions based on functional and technical specifications that meet quality and performance requirements. Must have experience with Microsoft Fabric.

    PRIMARY DUTIES AND RESPONSIBILITIES:
    + Utilize experience in ETL tools, with at least 5 years dedicated to Azure Data Factory (ADF), to design, code, implement, and manage multiple parallel data pipelines. Experience with Microsoft Fabric, Pipelines, Mirroring, and Data Flows Gen 2 usage is required.
    + Apply a deep understanding of data warehousing concepts, including data modeling techniques like star and snowflake schemas, SCD Type 2, Change Data Feeds, and Change Data Capture. Demonstrate hands-on experience with Data Lake Gen 2, Delta Lake, Delta Parquet files, JSON files, and big data storage layers; optimize and maintain big data storage using Partitioning, V-Order, Optimize, Vacuum, and other techniques.
    + Design and optimize medallion data models, warehouses, architectures, schemas, indexing, and partitioning strategies.
    + Collaborate with Business Insights and Analytics teams to understand data requirements and optimize storage for analytical queries.
    + Modernize databases and data warehouses and prepare them for analysis, managing for optimal performance.
    + Design, build, manage, and optimize enterprise data pipelines, ensuring efficient data flow, data integrity, and data quality throughout the process.
    + Automate efficient data acquisition, transformation, and integration from a variety of data sources including databases, APIs, message queues, data streams, etc.
    + Competently perform advanced data tasks with minimal supervision, including architecting advanced data solutions, leading and coaching others, and effectively partnering with stakeholders.
    + Interface with other technical and non-technical departments and outside vendors on assigned projects.
    + Under the direction of IT Management, establish standards, policies, and procedures pertaining to data governance, database/data warehouse management, metadata management, security, optimization, and utilization.
    + Ensure data security and privacy by implementing access controls, encryption, and anonymization techniques per data governance and compliance policies.
    + Manage schema drift within ETL processes, ensuring robust and adaptable data integration solutions.
    + Document data pipelines, processes, and architectural designs for future reference and knowledge sharing.
    + Stay informed of the latest trends and technologies in the data engineering field, and evaluate and adopt new tools, frameworks, and platforms (like Microsoft Fabric) to enhance data processing and storage capabilities.
    + When necessary, implement and document schema modifications made to the legacy production environment.
    + Perform any other function required by IT Management for the successful operation of all IT and data services provided to our clients.
    + Be available nights and weekends as needed for system changes and rollouts.

    EDUCATION AND EXPERIENCE REQUIREMENTS:
    + Bachelor's or Master's degree in computer science, information systems, applied mathematics, or a closely related field.
    + Minimum of ten (10) years of full-time employment experience as a data engineer, data architect, or equivalent required.
    + Must have experience with Microsoft Fabric.

    SKILLS:
    + Experience working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures, and integrated datasets using traditional and modern data integration technologies (such as ETL, ELT, MPP, data replication, change data capture, message-oriented data movement, API design, stream data integration, and data virtualization).
    + Experience working with cloud data engineering stacks (specifically Azure and Microsoft Fabric): Data Lake, Synapse, Azure Data Factory, Databricks, Informatica, Data Explorer, etc.
    + Strong, in-depth understanding of database architecture, storage, and administration utilizing the Azure stack.
    + Deep understanding of data architectural approaches, data engineering solutions, and software engineering principles and best practices.
    + Working knowledge of and experience with modern BI and ETL tools (Power BI, Power Automate, ADF, SSIS, etc.).
    + Experience utilizing data storage solutions including Azure Blob Storage and ADLS Gen 2.
    + Solid understanding of relational and dimensional database principles and best practices in client/server, thin-client, and cloud computing environments.
    + Advanced working knowledge of T-SQL and SQL Server: transactions, error handling, security, and maintenance, with experience writing complex stored procedures, views, and user-defined functions as well as complex functions, dynamic SQL, partitions, CDC, CDF, etc.
    + Experience with .NET scripting and understanding of API integration in a service-oriented architecture.
    + Knowledge of reporting tools, query languages, and semantic models, with specific experience with Power BI.
    + Understanding of and experience with Agile methodology.
    + PowerShell scripting experience desired.
    + Experience with Service Bus, Azure Functions, Event Grids, Event Hubs, or Kafka would be beneficial.

    Working Conditions: Available to work evenings and/or weekends (as required). Workdays and hours are Monday through Friday, 8:30 am to 5:30 pm ET.
    $79k-106k yearly est. 2d ago
  • Sr Electronic Data Interchange Coordinator

    Ashley Furniture Industries (4.1 company rating)

    Tampa, FL jobs

    On-Site Locations: Tampa, FL; Arcadia, WI (GC/USC Only)

    Senior EDI Coordinator
    Senior EDI Coordinators create new and update existing EDI maps to support the movement of thousands of transactions each day, set up and maintain EDI trading partners, set up and maintain EDI communication configurations, and provide support for a large assortment of EDI transactions with a variety of trading partners.

    Primary Job Functions:
    + Monitor inbound and outbound transaction processing to ensure successful delivery; take corrective action on transactions that are not successful.
    + Develop and modify EDI translation maps according to Business Requirements Documents and EDI specifications.
    + Perform unit testing and coordinate integrated testing with internal and external parties.
    + Perform map reviews to ensure new maps and map changes comply with requirements and standards.
    + Prepare, maintain, and review documentation, including Mapping Documents, Standard Operating Procedures, and System Documentation.
    + Perform trading partner setup, configuration, and administrative activities.
    + Analyze and troubleshoot connectivity, mapping, and data issues.
    + Provide support to our business partners and external parties.
    + Participate in an after-hours on-call rotation.
    + Set up and maintain EDI communication channels.
    + Provide coaching and mentoring to EDI Coordinators.
    + Suggest EDI best practices and opportunities for improvement.
    + Maintain and update AS2 certificates.
    + Deploy map changes to production.
    + Perform EDI system maintenance and upgrades.

    Job Qualifications:
    Education: Bachelor's Degree in Information Systems, Computer Science, or other related fields, or an equivalent combination of education and experience (Required).
    Experience:
    + 5+ years of practical EDI mapping experience, with emphasis on ANSI X12 (Required)
    + Experience working with XML and JSON transactions (Preferred)
    + Experience working with AS2, VAN, and sFTP communications (Preferred)
    + Experience working with AS2 certificates (Preferred)
    + Experience with the Azure DevOps Agile/Scrum platform (Preferred)
    + Experience in large, complex enterprise environments (Preferred)

    Knowledge, Skills and Abilities:
    + Advanced analytical and problem-solving skills
    + Strong attention to detail
    + Excellent written and verbal communication skills
    + Excellent client-facing and interpersonal skills
    + Effective time management and organizational skills
    + Work independently as well as in a team environment
    + Handle multiple projects simultaneously within established time constraints
    + Perform under strong demands in a fast-paced environment
    + Display empathy, understanding, and patience with employees and external customers
    + Respond professionally in situations with difficult employee/vendor/customer issues or inquiries
    + Working knowledge of Continuous Improvement methodologies
    + Strong working knowledge of Microsoft Office Suite
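For readers unfamiliar with the ANSI X12 transactions this role works with, here is a minimal sketch of segment-level parsing. X12 interchanges are segment-oriented: segments are commonly terminated with `~` and elements separated with `*` (the actual delimiters are declared in the ISA envelope, omitted here), and the sample 850 purchase-order fragment below is illustrative only:

```python
def parse_x12(raw, segment_term="~", element_sep="*"):
    """Split a raw X12 stream into (segment_id, [elements]) pairs.
    Real translators also validate the ISA/GS envelopes and use the
    delimiters declared there; this sketch assumes fixed ones."""
    segments = []
    for seg in raw.strip().split(segment_term):
        seg = seg.strip()
        if not seg:
            continue  # skip trailing empty chunk after the last terminator
        parts = seg.split(element_sep)
        segments.append((parts[0], parts[1:]))
    return segments

# Hypothetical fragment of an 850 purchase order (no ISA/GS envelope)
raw = "ST*850*0001~BEG*00*SA*PO12345**20240101~PO1*1*10*EA*9.95~SE*4*0001~"
parsed = parse_x12(raw)
# parsed[0] is ("ST", ["850", "0001"]); empty elements survive as "".
```

Note that the empty fourth element of the `BEG` segment is preserved as an empty string, which matters when mapping by element position.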
    $56k-76k yearly est. 3d ago
  • Data Engineer

    Cliff Services Inc. (4.8 company rating)

    Charlotte, NC jobs

    Job Title: Data Engineer / SQL Server Developer (7+ Years)
    Client: Wells Fargo
    Rate: $60/hr
    Interview Process: Code test + in-person interview

    Job Description
    Wells Fargo is seeking a Senior Data Engineer / SQL Server Developer (7+ years) who can work across both database development and QA automation functions. The ideal candidate will have strong SQL Server expertise along with hands-on experience in test automation tools.

    Key Responsibilities
    + Design, develop, and optimize SQL Server database structures, queries, stored procedures, triggers, and ETL workflows.
    + Perform advanced performance tuning, query optimization, and troubleshooting of complex SQL issues.
    + Develop and maintain data pipelines, ensuring data reliability, integrity, and high performance.
    + Build and execute automated test scripts using Selenium, BlazeMeter, or similar frameworks.
    + Perform both functional and performance testing across applications and data processes.
    + Support deployments in containerized ecosystems, ideally within OpenShift.
    + Collaborate with architecture, QA, DevOps, and application teams to ensure seamless delivery.

    Required Skills
    + Primary: 7+ years of hands-on SQL Server development (T-SQL, stored procedures, performance tuning, ETL).
    + Secondary: Experience working with OpenShift or other container platforms.
    + Testing / QA Automation: Strong experience with test automation tools like Selenium, BlazeMeter, JMeter, or equivalent; ability to design automated functional and performance test suites.

    Ideal Candidate Profile
    + Senior-level developer capable of taking ownership of both development and test automation deliverables.
    + Strong analytical and debugging skills across data engineering and testing disciplines.
    + Experience working in large-scale enterprise environments.
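The "query optimization" work this role centers on often comes down to giving the planner an index to use. The role targets SQL Server and T-SQL; the sketch below uses Python's built-in `sqlite3` purely so it runs anywhere, and the `trades` table and `ACC042` account are invented for illustration:

```python
import sqlite3

# An equality filter on an indexed column lets the query planner do an
# index search instead of a full table scan; the same principle drives
# T-SQL tuning, where you would inspect the execution plan instead of
# EXPLAIN QUERY PLAN.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE trades (id INTEGER PRIMARY KEY, account TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO trades (account, amount) VALUES (?, ?)",
    [(f"ACC{i % 100:03d}", i * 1.5) for i in range(10_000)],
)
conn.execute("CREATE INDEX idx_trades_account ON trades (account)")

# The plan output names the index it will use.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM trades WHERE account = ?",
    ("ACC042",),
).fetchall()
total = conn.execute(
    "SELECT SUM(amount) FROM trades WHERE account = ?", ("ACC042",)
).fetchone()[0]
```

Dropping the index and re-running `EXPLAIN QUERY PLAN` shows the query degrade to a scan of all 10,000 rows, which is the before/after comparison tuning work usually starts from.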
    $60 hourly 1d ago
  • Senior Data Engineer

    Leon Recruitment (4.2 company rating)

    Saint Petersburg, FL jobs

    Sr. Data Engineer CLIENT: Fortune 150 Company; Financial Services SUMMARY DESCRIPTION: The Data Engineer will serve in a strategic role designing and managing the infrastructure that supports data storage, transforming, processing, and retrieval enabling efficient data analysis and decision-making within the organization. This position is critical as part of the Database and Analytics team responsible for design, development, and implementation of complex enterprise-level data integration and consumption solutions. It requires a highly technical, self-motivated senior engineer who will work with analysts, architects, and systems engineers to develop solutions based on functional and technical specifications that meet quality and performance requirements. Must have Experience with Microsoft Fabric. PRIMARY DUTIES AND RESPONSIBILITIES: Utilize experience in ETL tools, with at least 5 years dedicated to Azure Data Factory (ADF), to design, code, implement, and manage multiple parallel data pipelines. Experience with Microsoft Fabric, Pipelines, Mirroring, and Data Flows Gen 2 usage is required. Apply a deep understanding of data warehousing concepts, including data modeling techniques like star and snowflake schemas, SCD Type 2, Change Data Feeds, Change Data Capture. Also demonstrates hands-on experience with Data Lake Gen 2, Delta Lake, Delta Parquet files, JSON files, big data storage layers, optimize and maintain big data storage using Partitioning, V-Order, Optimize, Vacuum and other techniques. Design and optimize medallion data models, warehouses, architectures, schemas, indexing, and partitioning strategies. Collaborate with Business Insights and Analytics teams to understand data requirements and optimize storage for analytical queries. Modernize databases and data warehouses and prepare them for analysis, managing for optimal performance. 
Design, build, manage, and optimize enterprise data pipelines ensuring efficient data flow, data integrity, and data quality throughout the process. Automate efficient data acquisition, transformation, and integration from a variety of data sources including databases, APIs, message queues, data streams, etc. Competently performs advanced data tasks with minimal supervision, including architecting advanced data solutions, leading and coaching others, and effectively partnering with stakeholders. Interface with other technical and non-technical departments and outside vendors on assigned projects. Under the direction of the IT Management, will establish standards, policies and procedures pertaining to data governance, database/data warehouse management, metadata management, security, optimization, and utilization. Ensure data security and privacy by implementing access controls, encryption, and anonymization techniques as per data governance and compliance policies. Expertise in managing schema drift within ETL processes, ensuring robust and adaptable data integration solutions. Document data pipelines, processes, and architectural designs for future reference and knowledge sharing. Stay informed of latest trends and technologies in the data engineering field, and evaluate and adopt new tools, frameworks, and platforms (like Microsoft Fabric) to enhance data processing and storage capabilities. When necessary, implement and document schema modifications made to legacy production environment. Perform any other function required by IT Management for the successful operation of all IT and data services provided to our clients. Available nights and weekends as needed for system changes and rollouts. EDUCATION AND EXPERIENCE REQUIREMENTS: Bachelor's or Master's degree in computer science, information systems, applied mathematics, or closely related field. Minimum of ten (10) years full time employment experience as a data engineer, data architect, or equivalent required. 
Must have experience with Microsoft Fabric.
SKILLS:
Experience working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures, and integrated datasets using traditional and modern data integration technologies (such as ETL, ELT, MPP, data replication, change data capture, message-oriented data movement, API design, stream data integration, and data virtualization).
Experience working with cloud data engineering stacks (specifically Azure and Microsoft Fabric): Data Lake, Synapse, Azure Data Factory, Databricks, Informatica, Data Explorer, etc.
Strong, in-depth understanding of database architecture, storage, and administration utilizing the Azure stack.
Deep understanding of data architectural approaches, data engineering solutions, and software engineering principles and best practices.
Working knowledge and experience with modern BI and ETL tools (Power BI, Power Automate, ADF, SSIS, etc.).
Experience utilizing data storage solutions including Azure Blob Storage and ADLS Gen 2.
Solid understanding of relational and dimensional database principles and best practices in client/server, thin-client, and cloud computing environments.
Advanced working knowledge of T-SQL and SQL Server - transactions, error handling, security, and maintenance - with experience writing complex stored procedures, views, and user-defined functions as well as complex functions, dynamic SQL, partitions, CDC, CDF, etc.
Experience with .NET scripting and understanding of API integration in a service-oriented architecture.
Knowledge of reporting tools, query languages, and semantic models, with specific experience with Power BI.
Understanding of and experience with agile methodology.
PowerShell scripting experience desired.
Experience with Service Bus, Azure Functions, Event Grids, Event Hubs, or Kafka would be beneficial.
Working Conditions: Available to work evenings and/or weekends (as required).
Workdays and hours are Monday through Friday 8:30 am to 5:30 pm ET.
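The schema-drift handling this posting calls for can be illustrated with a minimal, vendor-neutral sketch (the actual stack here is Azure/Microsoft Fabric; plain Python is used for illustration, and all column names are hypothetical):

```python
# Minimal sketch of schema-drift handling in an ETL step: incoming rows
# may use old or new column names; map both to a canonical schema and
# default-fill any columns the source omitted.
CANONICAL = {"customer_id", "order_total", "order_date"}
ALIASES = {"cust_id": "customer_id", "total": "order_total", "date": "order_date"}

def normalize_row(row: dict) -> dict:
    out = {}
    for key, value in row.items():
        canonical = ALIASES.get(key, key)   # translate legacy names
        if canonical in CANONICAL:          # drop unknown columns
            out[canonical] = value
    for col in CANONICAL:                   # fill columns this source lacks
        out.setdefault(col, None)
    return out

legacy = {"cust_id": 42, "total": 19.99}    # old schema, no date column
current = {"customer_id": 42, "order_total": 19.99, "order_date": "2024-05-01"}
print(normalize_row(legacy))
print(normalize_row(current))
```

Both rows come out with the same three canonical keys, which is what lets downstream loads stay stable as the source schema drifts.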
    $80k-107k yearly est. 2d ago
  • Data Engineer

    Cliff Services Inc. 4.8 company rating

    Chicago, IL

    We are seeking a highly skilled Data Engineer with strong expertise in Scala, AWS, and Apache Spark. The ideal candidate will have 7+ years of hands-on experience building scalable data pipelines, distributed processing systems, and cloud-native data solutions.
    Key Responsibilities
    Design, build, and optimize large-scale data pipelines using Scala and Spark.
    Develop and maintain ETL/ELT workflows across AWS services.
    Work on distributed data processing using Spark, Hadoop, or similar.
    Build data ingestion, transformation, cleansing, and validation routines.
    Optimize pipeline performance and ensure reliability in production environments.
    Collaborate with cross-functional teams to understand requirements and deliver robust solutions.
    Implement CI/CD best practices, testing, and version control.
    Troubleshoot and resolve issues in complex data flow systems.
    Required Skills & Experience
    7+ years of Data Engineering experience.
    Strong programming experience with Scala (must-have).
    Hands-on experience with Apache Spark (core, SQL, streaming).
    Solid experience with AWS cloud services (Glue, EMR, Lambda, S3, EC2, IAM, etc.).
    High proficiency in SQL and relational/NoSQL data stores.
    Strong understanding of data modeling, data architecture, and distributed systems.
    Experience with workflow orchestration tools (Airflow, Step Functions, etc.).
    Strong communication and problem-solving skills.
    Preferred Skills
    Experience with Kafka, Kinesis, or other streaming platforms.
    Knowledge of containerization tools like Docker or Kubernetes.
    Background in data warehousing or modern data lake architectures.
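The "cleansing and validation routines" this role describes can be sketched framework-agnostically (the posting's stack is Scala/Spark; plain Python is used here for illustration, and the record fields are hypothetical):

```python
# Sketch of a validate-then-split cleansing step: each record is checked
# against simple rules, and failing records are quarantined with their
# error list instead of flowing downstream.
def validate(record: dict) -> list:
    errors = []
    if not record.get("id"):
        errors.append("missing id")
    try:
        float(record.get("amount", ""))
    except (TypeError, ValueError):
        errors.append("non-numeric amount")
    return errors

def cleanse(records):
    good, quarantined = [], []
    for rec in records:
        errs = validate(rec)
        if errs:
            quarantined.append((rec, errs))  # keep errors for triage
        else:
            good.append(rec)
    return good, quarantined

raw = [{"id": "a1", "amount": "10.5"}, {"id": "", "amount": "x"}]
good, bad = cleanse(raw)
print(len(good), len(bad))  # 1 1
```

In a Spark job the same split is typically expressed as two filtered DataFrames; the quarantine-with-reasons pattern is what makes production pipelines debuggable.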
    $91k-120k yearly est. 1d ago
  • Senior Data Engineer

    Fortune 4.0 company rating

    Durham, NC

    We are seeking an experienced Senior Big Data & Cloud Engineer to design, build, and deliver advanced API and data solutions that support financial goal planning, investment insights, and projection tools. This role is ideal for a seasoned engineer with 10+ years of hands-on experience in big data processing, distributed systems, cloud-native development, and end-to-end data pipeline engineering. You will work across retail, clearing, and custody platforms, leveraging modern cloud and big data technologies to solve complex engineering challenges. The role involves driving technology strategy, optimizing large-scale data systems, and collaborating across multiple engineering teams.
    Key Responsibilities
    Design and develop large-scale data movement services using Apache Spark (EMR) or Spring Batch.
    Build and maintain ETL workflows, distributed pipelines, and automated batch processes.
    Develop high-quality applications using Java, Scala, REST, and SOAP integrations.
    Implement cloud-native solutions leveraging AWS S3, EMR, EC2, Lambda, Step Functions, and related services.
    Work with modern storage formats and NoSQL databases to support high-volume workloads.
    Contribute to architectural discussions and code reviews across engineering teams.
    Drive innovation by identifying and implementing modern data engineering techniques.
    Maintain strong development practices across the full SDLC.
    Design and support multi-region disaster recovery (DR) strategies.
    Monitor, troubleshoot, and optimize distributed systems using advanced observability tools.
    Required Skills
    10+ years of experience in software/data engineering with strong big data expertise.
    Proven ability to design and optimize distributed systems handling large datasets.
    Strong communicator who collaborates effectively across teams.
    Ability to drive architectural improvements and influence engineering practices.
    Customer-focused mindset with commitment to delivering high-quality solutions.
    Adaptable, innovative, and passionate about modern data engineering trends.
    $80k-105k yearly est. 2d ago
  • Temporary Data Analyst (30-40 hours/week, 3-month assignment)

    Napco Media 4.9 company rating

    Philadelphia, PA

    NAPCO Media (*************** a subsidiary of PRINTING United Alliance (*************************** is a fast-paced B2B media organization serving the printing, retail, and nonprofit industries. We specialize in the creation and cross-channel distribution of exceptional content on print and digital platforms such as newsletters, magazines, podcasts, social media, and events. Our mission is to build community between the audiences and clients we serve.
    Role Summary
    We are seeking a technical, production-focused Data Analyst to cover a 3-month leave. This role requires someone who can immediately take on survey programming, data cleaning, cross-tabulation, and chart creation with minimal ramp-up. This is not a general market research position - candidates must have hands-on experience with the specific tools and workflows listed below.
    Core Responsibilities
    Program surveys in SurveyMonkey, including advanced logic, piping, randomization, and QA.
    Manage collectors, fielding, troubleshooting, and survey flow validation.
    Clean and structure raw survey data in Excel (remove bad responses, combine datasets, build clean tables).
    Create segmented databooks (cross-tabs, banner tables) based on internal specifications.
    Build PowerPoint chart decks using provided templates and brand formatting.
    Perform QA on surveys, datasets, and charts to ensure accuracy and consistency.
    Work closely with the research team to deliver accurate, on-time backend outputs.
    Required Skills
    Strong, proven experience with SurveyMonkey programming (not just taking surveys - full setup and logic).
    Advanced Excel skills for cleaning, organizing, and segmenting data.
    Experience producing cross-tabs and analyzing survey-based datasets.
    Strong PowerPoint skills, especially charts and visual formatting.
    High attention to detail, independence, and reliability.
    Preferred Experience
    Prior work in research operations, data processing, or survey analytics.
    Experience with B2B or market research studies.
    Familiarity with external survey panels (helpful but not required). Experience with Q software (helpful but not required).
    Assignment Details
    Schedule: 30-40 hours/week
    Duration: 3 months
    Location: Remote
    Start: ASAP
    Focus: Pure production work (no client communication or project management)
    Email resume to ************.
    We strive to develop and retain a diverse, equitable, and inclusive workplace where all employees feel they are respected, treated fairly, and given equal opportunity to excel in their careers. NAPCO Media is committed to providing equal opportunity for employees and applicants in all aspects of the employment relationship, without regard to race, religion, color, age, gender (including pregnancy, childbirth, or related medical conditions), marital status, parental status, sexual orientation, gender identity, gender expression, ancestry, national origin, citizenship, political affiliation, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. All employment decisions are decided on the basis of qualifications, merit, and business needs.
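The cross-tab (banner table) production described above amounts to counting responses by a segment variable; a minimal sketch with hypothetical survey fields, in plain Python rather than SurveyMonkey/Q:

```python
# Sketch of a cross-tab: responses to question q1 broken out by the
# "segment" banner variable. Field names and answers are hypothetical.
from collections import Counter

responses = [
    {"segment": "Print", "q1": "Yes"},
    {"segment": "Print", "q1": "No"},
    {"segment": "Retail", "q1": "Yes"},
    {"segment": "Retail", "q1": "Yes"},
]

def crosstab(rows, banner, question):
    counts = Counter((r[banner], r[question]) for r in rows)
    segments = sorted({r[banner] for r in rows})
    answers = sorted({r[question] for r in rows})
    # One row per segment, one column per answer, zero-filled.
    return {seg: {ans: counts[(seg, ans)] for ans in answers} for seg in segments}

print(crosstab(responses, "segment", "q1"))
# {'Print': {'No': 1, 'Yes': 1}, 'Retail': {'No': 0, 'Yes': 2}}
```

In practice the same table would be built in Excel or Q against the cleaned dataset; the zero-filling step is what keeps banner columns aligned across segments.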
    $51k-71k yearly est. 1d ago
  • Data Modeler II

    TPI Global Solutions 4.6 company rating

    Houston, TX

    SCM Enable & Innovate is seeking a Data Modeler II with a product-driven, start-up mindset to support end-to-end development of innovative data solutions that drive measurable business value across Supply Chain operations. This role will work in a hybrid model based in Houston, TX and requires strong expertise in data science, analytics, ETL development, and product execution within the oil & gas industry.
    Responsibilities
    Product Development
    Develop innovative data science solutions leveraging deep knowledge of the oil & gas industry and Supply Chain processes.
    Design and optimize ETL pipelines for scalable, high-performance data processing using Azure Databricks, Microsoft Dataflow, Dataverse, and/or Oracle Autonomous Data Warehouse (ADW).
    Work with various ERP systems, including SAP and Oracle, along with their analytics tools to enable data-driven decisions.
    Integrate solutions with enterprise data platforms and visualization tools for reporting.
    Build and maintain master datasets to support procurement, spend analytics, and market intelligence.
    Ensure adherence to data governance, security, and company policies across all development efforts.
    Program Management
    Define and document project objectives, problem statements, business value, business processes, and requirements.
    Manage project timelines, resources, and cross-functional coordination to ensure alignment with project goals.
    Facilitate project meetings and communicate progress updates to stakeholders.
    Maintain all project documentation, including idea assessments, requirements, design documents, and weekly status reports.
    Communicate technical details, requirements, and design elements to internal business stakeholders in clear, understandable language.
    Skills & Qualifications
    Experience: 5-7 years of relevant experience in Supply Chain Management (SCM) or product development.
    Technical Expertise
    Advanced proficiency in data science, including statistical analysis, predictive analytics, text-based sentiment analysis, machine learning, and data visualization.
    Strong experience working with unstructured data, such as PO line descriptions and customer feedback.
    Advanced proficiency in Python and PySpark with Databricks for large-scale data processing.
    Hands-on experience with prompt engineering.
    Experience developing solutions using the Microsoft Power Platform (Copilot Studio, Dataflow, Dataverse, Power Automate, Model-Driven Apps).
    In-depth understanding of SAP and Oracle Cloud modules (SCM, A/P, Projects, Finance).
    Proficiency in SQL for data transformation (preferred).
    Experience using Alteryx for data preparation and workflow automation (preferred).
    Industry Knowledge
    Strong understanding of the oil & gas sector, including SCM Procure-to-Pay / Source-to-Pay processes and associated value drivers.
    Project Management
    Skilled in agile/waterfall methodologies.
    Strong stakeholder engagement and communication skills.
    Proficiency in documentation tools including PowerPoint, Excel, and Visio.
    $92k-119k yearly est. 1d ago
  • Data Modeler II

    TPI Global Solutions 4.6 company rating

    Houston, TX

    Job Title: Data Modeler II
    Contract: 12+ Months with Possible Extension
    Department: SCM Enablement & Innovation. A key misconception is that this team is an IT group; it sits on the business side and manages innovation for the business. The role involves creating and presenting solutions for the client, not Power BI development. The team works with business teams to solve enterprise problems and builds low-code solutions for small and medium problems. The client's core ERP system is Oracle Cloud.
    Responsibilities:
    Bring a depth of AI understanding and a good understanding of Python for data transformations, project management, requirements gathering, and data gathering.
    Perform user acceptance testing and deployment.
    Proactively oversee and plan the projects you are working on without needing guidance or instruction.
    Provide end-to-end solutions.
    Gather data from the ERP system, load it into Dataverse, create prompts, etc.
    Gather pain points from customers and work with the team on how to solve those problems.
    Review data outside the ERP, shape it, and present solutions to customers/stakeholders.
    Use Databricks (good knowledge needed), Dataverse, and API connections between the ERP and Oracle platforms.
    Use Power Automate/Power Apps.
    Translate business terms to technical terms and vice versa.
    Skills:
    Depth of AI understanding, with a good grasp of Python for data transformations.
    Knowledge of implementing solutions using Databricks would be valuable.
    Good understanding of the end-to-end user enterprise and how users should interact with local platforms.
    Good understanding of Oracle Cloud.
    50/50 technical/business oriented - the client does not want someone highly technical who cannot speak on behalf of the business.
    Some knowledge of the supply chain OR oil & gas industry needed.
    Experience cleaning data is a huge plus.
    $92k-119k yearly est. 1d ago
  • Actuarial Analyst

    Pacer Group 4.5 company rating

    San Antonio, TX

    Actuarial Analyst
    Contract: 6 Months
    Domain: Actuary - P&C (Property & Casualty)
    About the Role
    We are seeking a highly skilled Actuarial Analyst to support state filing processes, rate analysis, and actuarial modeling within the P&C domain. This role requires strong analytical capability, actuarial expertise, and the ability to work independently on complex, unstructured projects.
    Key Responsibilities
    Analyze assigned state data and manage the end-to-end state filing process, including developing/revising rates, performing trend analysis, and preparing documentation for the Department of Insurance (DOI).
    Work closely with the Lead Actuary to obtain sign-offs and coordinate responses to any DOI objections.
    Independently apply technical and actuarial methodologies to complete advanced actuarial projects with minimal guidance.
    Select, validate, and analyze data required for trend analysis to support accurate rate level indications.
    Translate business needs into technical requirements and run actuarial/statistical models.
    Interpret, summarize, and communicate results clearly to support business decisions while maintaining a strong control environment.
    Utilize actuarial, mathematical, or statistical techniques to enhance the actuarial work product.
    Apply understanding of insurance products, market trends, and stakeholder needs to solve unstructured business problems.
    Ensure all business activities comply with risk, regulatory, and compliance standards.
    Qualifications
    Experience in Actuarial Science, P&C Insurance, Statistical Modeling, or Rate Filing.
    Strong analytical and problem-solving skills.
    Ability to work independently on advanced projects.
    Excellent communication and documentation skills.
    Prior experience handling DOI filings, trend analysis, or rate development preferred.
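The "trend analysis to support accurate rate level indications" described here follows the textbook loss-ratio indication method; a sketch with entirely hypothetical figures:

```python
# Sketch of a loss-ratio rate level indication: trend historical losses
# forward, form the loss ratio against earned premium, and compare to
# the permissible loss ratio. All figures are hypothetical.
def trended_losses(losses: float, annual_trend: float, years: float) -> float:
    return losses * (1 + annual_trend) ** years

def indicated_rate_change(loss_ratio: float, permissible_loss_ratio: float) -> float:
    return loss_ratio / permissible_loss_ratio - 1

losses = trended_losses(650_000, annual_trend=0.05, years=2)
loss_ratio = losses / 1_000_000  # hypothetical earned premium
change = indicated_rate_change(loss_ratio, permissible_loss_ratio=0.70)
print(f"indicated rate change: {change:+.2%}")
```

A real DOI filing layers on credibility weighting, loss development, and expense provisions; this shows only the core indication arithmetic.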
    $68k-89k yearly est. 5d ago
  • Data Governance Analyst

    HD Supply 4.6 company rating

    Atlanta, GA

    Responsible for implementing the day-to-day needs of the data governance and data quality program. Participate in recommending and implementing policies and procedures for data governance approved by the Data Governance Council and Data Governance team. Identify data quality opportunities and drive compliance with data governance and quality initiatives. Ensure data governance opportunities are identified and addressed throughout the project life cycle.
    Job Specific Responsibilities and Preferred Qualifications
    Preferred Qualifications - Remote East Coast Role
    Strong expertise in SQL with the ability to write, optimize, and troubleshoot complex queries for data extraction, analysis, and reporting.
    Proficiency in creating, testing, and troubleshooting regular expressions (regex) for text parsing, validation, and pattern-based data manipulation.
    Experience with eCommerce analytics, including analyzing performance metrics and delivering actionable insights to improve business outcomes.
    Hands-on experience with data visualization tools (e.g., Tableau, Adobe Analytics) and preparing clear reports and presentations for leadership and cross-functional teams.
    Familiarity with cloud-based data platforms such as Snowflake, AWS Redshift, or Google BigQuery; Python for data manipulation and automation is a plus.
    Demonstrated ability to support data governance initiatives by applying policies, standards, and dashboards to monitor and improve data quality.
    Strong analytical and problem-solving skills with exceptional attention to detail, accuracy, and data integrity.
    Major Tasks, Responsibilities, and Key Accountabilities
    Participates in the execution and implementation of approved data definitions, policies, standards, data access processes, and dashboard statistics.
    Supports requests to change configurations, use, or design of data elements for a specific area of influence.
    Conducts testing and user acceptance of new system functionality.
    Analyzes and identifies data sources and data redundancy, and implements processes to remediate data issues and/or data clean-up efforts.
    Supports governance principles, policies, and stewardship within the business.
    Assists in the development and distribution of the data quality dashboard.
    Scopes, resources, and manages data quality initiatives.
    Participates in the review of all system enhancements and new technologies for data needs, use, and redundancies.
    Nature and Scope
    Demonstrates skill in data analysis techniques by resolving missing/incomplete information and inconsistencies/anomalies in more complex research/data.
    Nature of work requires increasing independence; receives guidance only on unusual, complex problems or issues. Work review typically involves periodic review of output by a supervisor and/or direct customers of the process.
    May provide general guidance/direction to or train junior-level support or professional personnel.
    Work Environment
    Located in a comfortable indoor area. Any unpleasant conditions would be infrequent and not objectionable.
    Frequent periods are spent standing or sitting in the same location with some opportunity to move about. Occasionally there may be a requirement to stoop or lift light material or equipment (typically less than 8 pounds).
    Typically requires overnight travel less than 10% of the time.
    Education and Experience
    Typically requires a BS/BA in a related discipline. Generally 2-5 years of experience in a related field, OR MS/MA and generally 2-4 years of experience in a related field. Certification is required in some areas.
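The regex-based validation this role calls for can be sketched in a few lines; the SKU format and records below are hypothetical examples, not HD Supply's actual data standards:

```python
# Sketch of pattern-based data quality validation: flag records whose
# SKU does not match an expected format, the kind of check a data
# quality dashboard would report on.
import re

SKU_RE = re.compile(r"[A-Z]{3}-\d{5}")  # hypothetical format: ABC-12345

records = [
    {"sku": "ABC-12345", "desc": "hammer"},
    {"sku": "abc12345", "desc": "wrench"},   # lowercase, missing dash
]

invalid = [r for r in records if not SKU_RE.fullmatch(r["sku"])]
print([r["sku"] for r in invalid])  # ['abc12345']
```

The same pattern can drive a remediation queue: failing rows are routed to stewards rather than silently loaded.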
    $65k-94k yearly est. 3d ago
  • Data Scientist

    Otter 4.4 company rating

    Mountain View, CA

    The Opportunity
    As a Data Scientist at Otter.ai, you will be a key player within our Product Development organization. You will leverage your advanced data analytics skills and deep understanding of our products and market to drive strategic decision-making, optimize engagement and monetization strategies, and accelerate product adoption. Your work will significantly enhance customer experiences and contribute to our business success. If you are passionate about using data to drive business growth and want to be part of a team shaping the future of conversations, we invite you to apply and help us achieve our mission of making conversations more valuable.
    Your Impact
    * Collaborate with Product teams to understand business objectives and challenges, translating them into data-driven insights and recommendations.
    * Develop and implement predictive models, analytical tools, and methodologies to analyze product usage and customer behaviors.
    * Analyze large datasets to generate actionable insights that guide the development and optimization of engagement and monetization strategies.
    * Design and execute experiments to test hypotheses and measure the effectiveness of various features, product experiences, and strategies.
    * Partner with cross-functional teams, including Product Management and Engineering, to integrate data-driven insights into our products and services, driving continuous improvement and innovation.
    * Present findings and recommendations to key stakeholders, including executives, to inform strategic decision-making and shape the company's Product roadmaps.
    * Collaborate with Data Engineers to ensure data quality, accessibility, and reliability for analysis purposes.
    * Stay current with industry trends, emerging technologies, and best practices in data science, machine learning, and AI to drive innovation and competitiveness.
    We're Looking for Someone Who Has
    * 3+ years as a Data Scientist in a software technology or AI-driven company.
    * Proficiency in SQL for querying large datasets, and expertise in Python and R for data manipulation, analysis, and visualization.
    * Strong background in data analysis, statistical modeling, and other data science techniques.
    * Familiarity with B2B SaaS business models.
    * Excellent ability to translate complex findings into actionable insights for non-technical stakeholders.
    * Proven ability to work effectively in cross-functional teams and manage multiple projects simultaneously.
    * Strong problem-solving skills and the ability to thrive in a fast-paced, dynamic start-up environment.
    About Otter.ai
    We are in the business of shaping the future of work. Our mission is to make conversations more valuable. With over 1B meetings transcribed, Otter.ai is the world's leading tool for meeting transcription, summarization, and collaboration. Using artificial intelligence, Otter generates real-time automated meeting notes, summaries, and other insights from in-person and virtual meetings - turning meetings into accessible, collaborative, and actionable data that can be shared across teams and organizations. The company is backed by early investors in Google, DeepMind, Zoom, and Tesla.
    Otter.ai is an equal opportunity employer. We proudly celebrate diversity and are committed to building an inclusive and accessible workplace. We provide reasonable accommodations for qualified applicants throughout the hiring process.
    Accessibility & Accommodations
    Otter.ai is committed to providing reasonable accommodations for candidates with disabilities in our hiring process. If you need assistance or an accommodation during any stage of the recruitment process, please contact *********** at least 3 business days before your interview.
    * Otter.ai does not accept unsolicited resumes from 3rd party recruitment agencies without a written agreement in place for permanent placements.
    Any resume or other candidate information submitted outside of established candidate submission guidelines (including through our website or via email to any Otter.ai employee) and without a written agreement will be deemed our sole property, and no fee will be paid should we hire the candidate.
    Salary Range: $155,000 to $185,000 USD per year. This range represents the low and high end of the estimated salary for this position. The actual base salary offered depends on several factors. Our base salary is just one component of our comprehensive total rewards package. #LI-Hybrid
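The experiment work this role involves (for example, comparing conversion between a control and a variant) is commonly analyzed with a two-proportion z-test; a stdlib-only sketch with hypothetical counts:

```python
# Sketch of a two-proportion z-test for an A/B experiment: did variant B
# convert at a different rate than control A? Counts are hypothetical.
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF expressed with erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(120, 1000, 150, 1000)  # 12.0% vs 15.0% conversion
print(round(z, 2), round(p, 4))
```

In practice a data scientist would also check sample-size planning and multiple-comparison corrections; the z-test is the core of the readout.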
    $155k-185k yearly 3d ago
  • Data Scientist - MADE Operations

    New Balance 4.8 company rating

    Lawrence, MA

    Who We Are:
    Since 1906, New Balance has empowered people through sport and craftsmanship to create positive change in communities around the world. We innovate fearlessly, guided by our core values and driven by the belief that conventions were meant to be challenged. We foster a culture in which every associate feels welcomed and respected, where leaders and creatives are inspired to shape the world of tomorrow by taking bold action today.
    JOB MISSION:
    The MADE Operations Data Scientist will enable a more data-driven, digitally native culture across the MADE in USA team by applying advanced analytics, statistical modeling, and data science techniques to solve operational challenges. This role supports cross-functional operations spanning product development, engineering, planning, purchasing, and logistics by delivering actionable insights, leading metric development, and building predictive models and alerting systems. The ideal candidate combines technical excellence in data science and data engineering with a deep understanding of footwear and apparel manufacturing, and the ability to translate complex data into clear, impactful business decisions.
    MAJOR ACCOUNTABILITIES:
    Develop and maintain Power BI dashboards and reporting systems that surface key operational metrics and drive visibility across the MADE organization.
    Build and deploy analytical models to forecast performance, identify bottlenecks, and optimize processes across product development, manufacturing, planning, and logistics.
    Design and maintain semantic data models that enable scalable, self-service analytics and support advanced querying and analysis.
    Conduct exploratory data analysis (EDA) to uncover trends, anomalies, and opportunities for operational improvement.
    Collaborate with cross-functional stakeholders to frame business problems as data science questions and deliver actionable insights.
    Implement alerting and monitoring systems that proactively flag deviations in performance or quality metrics.
    Partner with IT and data engineering teams to ensure robust data pipelines and infrastructure that support modeling and reporting needs.
    Validate and improve data quality across systems such as ERP, MES, PLM, and other operational platforms.
    REQUIREMENTS FOR SUCCESS:
    Minimum 2 years of experience in a data analytics or data science role within a brand that manufactures apparel and/or footwear.
    Proficiency in Python and open-source data science tools (e.g., Pandas, NumPy, Scikit-learn, Jupyter).
    Strong experience with Power BI, including DAX, data modeling, and dashboard design.
    Familiarity with manufacturing, supply chain, or product development data and systems.
    Experience designing semantic layers and integrating data from multiple sources (e.g., ERP, MES, PLM).
    Regular Associate Benefits
    Our products are only as good as the people we hire, so we make sure to hire the best and treat them accordingly. New Balance offers a comprehensive traditional benefits package including three options for medical insurance as well as dental, vision, life insurance and 401K. We also proudly offer a slate of more nontraditional perks - opportunities like online learning and development courses, tuition reimbursement, $100 monthly student loan support and various mentorship programs - that encourage our associates to grow personally as they develop professionally. You'll also enjoy a yearly $1,000 lifestyle reimbursement, 4 weeks of vacation, 12 holidays and generous parental leave, because work-life balance is more than just a buzzword - it's part of our culture.
    Temporary associates are provided three options for medical insurance as well as dental and vision insurance and an associate discount. Part time associates are provided 401k, short term disability, a yearly $300 lifestyle reimbursement and an associate discount.
    Flexible Work Schedule
    For decades we have fostered a unique culture founded on our values with a particular focus on in-person teamwork and collaboration. Our North American hybrid model encourages rich in-person experiences, showcasing our commitment to teamwork and connection, while maintaining flexibility for associates. New Balance Associates currently work in office three days per week (Tuesday, Wednesday, and Thursday). Our offices are fully open, and amenities are available across our North American office locations. To continue our focus on hybrid work we have introduced "Work from Anywhere" (WFA) for four weeks per calendar year. This model will help us enhance our culture while continuing to maintain elements of flexibility.
    Equal Opportunity Employer
    New Balance provides equal opportunities for all current and prospective associates to ensure that employment, training, compensation, transfer, promotion and other terms, conditions and privileges of employment are provided without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, age, handicap, genetic information and/or status as an Armed Forces service medal veteran, recently separated veteran, qualified disabled veteran or other protected veteran, or any other protected status.
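The alerting systems named in the accountabilities above can be as simple as a control-limit check against trailing history; a sketch with hypothetical daily defect rates:

```python
# Sketch of a metric-deviation alert: flag today's value when it falls
# outside k standard deviations of its trailing history. The defect-rate
# figures are hypothetical.
from statistics import mean, stdev

def alert(history, today, k=3.0):
    mu, sigma = mean(history), stdev(history)
    return abs(today - mu) > k * sigma

history = [0.021, 0.019, 0.020, 0.022, 0.018, 0.020, 0.021]
print(alert(history, 0.035))  # True  - well outside 3 sigma
print(alert(history, 0.022))  # False - within normal variation
```

Production versions typically add seasonality handling and route alerts to a dashboard or notification channel, but the control-limit core is the same.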
    $91k-118k yearly est. 55d ago
  • Data & Evaluation Applied AI Scientist

    SES 4.2 company rating

    Woburn, MA

    SES AI Corp. (NYSE: SES) is dedicated to accelerating the world's energy transition through groundbreaking material discovery and advanced battery management. We are at the forefront of revolutionizing battery creation, pioneering the integration of cutting-edge machine learning into our research and development. Our AI-enhanced, high-energy-density and high-power-density Li-Metal and Li-ion batteries are unique; they are the first in the world to utilize electrolyte materials discovered by AI. This powerful combination of "AI for science" and material engineering enables batteries that can be used across various applications, including transportation (land and air), energy storage, robotics, and drones. To learn more about us, please visit: **********
    What We Offer:
    * A highly competitive salary and robust benefits package, including comprehensive health coverage and an attractive equity/stock options program within our NYSE-listed company.
    * The opportunity to contribute directly to a meaningful scientific project - accelerating the global energy transition - with a clear and broad public impact.
    * Work in a dynamic, collaborative, and innovative environment at the intersection of AI and material science, driving the next generation of battery technology.
    * Significant opportunities for professional growth and career development as you work alongside leading experts in AI, R&D, and engineering.
    * Access to the state-of-the-art facilities and proprietary technologies used to discover and deploy AI-enhanced battery solutions.
    What We Need:
    The SES AI Prometheus team is seeking an exceptional Data & Evaluation Applied AI Scientist to serve as the domain expert ensuring that SES AI's complex battery-domain knowledge is correctly represented and validated within advanced AI systems, including LLM pipelines and multi-agent workflows. This role is vital for bridging the gap between raw battery materials knowledge and structured, AI-trainable data.
As the Data & Model Quality Manager, you will focus on the integrity, structure, and fidelity of the knowledge embedded within our AI systems. Essential Duties and Responsibilities: * Data Curation & Validation * Translate deep Battery Materials Knowledge and next-generation battery concepts into correctly structured, high-quality, AI-trainable data. * Lead processes for rigorous data validation, cleaning, and annotation to ensure consistency and correctness across all datasets. * Oversee the creation and management of benchmark datasets and design domain-specific multimodal evaluations to test model accuracy. * AI System Quality & Correctness * Partner closely with AI architecture and engineering teams to ensure the correctness, reliability, and scientific reasoning quality of models, including LLM creation and multi-agent orchestration. * Implement techniques, including those inspired by reinforcement learning (RLHF), to tune and validate model behavior against established scientific principles. * Ensure that resulting models accurately understand molecular chemistry, materials data, and complex scientific reasoning in the battery domain. * Strategy & Collaboration * Drive the application of Battery Informatics principles across all data pipelines and modeling effots. Education and/or Experience: * Education: Ph.D. in Chemical Engineering with a focus on Lithium battery systems, Materials Science, or a closely related computational/domain field. * Domain Expertise: Deep expertise in battery materials, particularly knowledge required to convert complex, real-world data into AI-trainable formats. * Data Quality & Validation: Proven experience in data validation, annotation, and benchmark creation for complex scientific or engineering datasets. * AI Exposure: Experience working with advanced AI systems, including familiarity with LLM pipelines and the principles of multi-agent orchestration. 
* Applicable Background: Experience in roles such as Applied Scientist in Molecular/Materials AI or similar specialist roles focused on AI system quality in a scientific domain. Preferred Qualifications: * Advanced AI Techniques: Experience with specialized techniques used for model tuning and alignment, such as Reinforcement Learning from Human Feedback (RLHF). * Industry Precedent: Previous experience in specialized environments like battery focus labs, materials data science groups, or AI4Science teams with a focus on agent pipeline building and model tuning (e.g., drawing from precedents like DeepMind or Fair Labs). * Evaluation Design: Direct experience designing and executing domain-specific multimodal evaluations for complex AI models. * Computational Focus: Experience as a Computational battery AI specialist.
    $82k-119k yearly est. Auto-Apply 16d ago
  • Data & Evaluation Applied AI Scientist

    SES 4.2company rating

    Boston, MA jobs

    Same role and description as the Woburn, MA listing above.
    $82k-119k yearly est. Auto-Apply 16d ago

Learn more about P&G jobs

View all jobs