Data Scientist
Data Scientist job at P&G
Do you enjoy solving billion-dollar data science problems across trillions of data points? Are you passionate about working at the cutting edge of interdisciplinary boundaries, where computer science meets hard science? If you like turning untidy data into nonobvious insights and surprising business leaders with the transformative power of Artificial Intelligence (AI), including Generative and Agentic AI, we want you on our team at P&G.
As a Data Scientist in our organization, you will play a crucial role in disrupting current business practices by designing and implementing innovative models that enhance our processes. You will research, design, and customize algorithms tailored to a range of problems and data types. Utilizing your expertise in Operations Research (including optimization and simulation) and machine learning models (such as tree models, deep learning, and reinforcement learning), you will directly contribute to the development of scalable Data Science algorithms. Your work will also integrate advanced techniques from Generative and Agentic AI to create more dynamic and responsive models, enhancing our analytical capabilities. You will collaborate with Data and AI Engineering teams to productionize these solutions, applying exploratory data analysis, feature engineering, and model building within cloud environments on massive datasets to deliver accurate and impactful insights. Additionally, you will mentor others as a technical coach and become a recognized expert in one or more Data Science techniques, quantifying the improvements in business outcomes resulting from your work.
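For illustration only, here is a minimal Python sketch of the exploratory-analysis, feature-engineering, and model-building loop this paragraph describes; the dataset, column names, and target are hypothetical placeholders, not P&G specifics.

```python
# Minimal sketch of the explore -> engineer -> model loop described above.
# "shipments.csv" and all column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("shipments.csv")   # hypothetical dataset
print(df.describe())                # quick exploratory pass

# Simple feature engineering: a ratio feature and a one-hot-encoded category.
df["cost_per_unit"] = df["total_cost"] / df["units"]
df = pd.get_dummies(df, columns=["region"])

X = df.drop(columns=["late"])       # "late" is the hypothetical target
y = df["late"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```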
Key Responsibilities:
+ Algorithm Design & Development: Directly contribute to the design and development of scalable Data Science algorithms.
+ Collaboration: Work closely with Data and Software Engineering teams to effectively productionize algorithms.
+ Data Analysis: Apply thorough technical knowledge to large datasets, conducting exploratory data analysis, feature engineering, and model building.
+ Coaching & Mentorship: Develop others as a technical coach, sharing your expertise and insights.
+ Expertise Development: Become a known expert in one or multiple Data Science techniques and methodologies.
Job Qualifications
Required Qualifications:
+ Education: Currently pursuing or holding a Master's degree in a quantitative field (Operations Research, Computer Science, Engineering, Applied Mathematics, Statistics, Physics, Analytics, etc.), or equivalent work experience.
+ Technical Skills: Proficient in programming languages such as Python and familiar with data science/machine learning libraries like OpenCV, scikit-learn, PyTorch, TensorFlow/Keras, and Pandas. Demonstrated ability to develop and test code within cloud environments.
+ Communication: Strong written and verbal communication skills, with the ability to influence others to take action.
Preferred Qualifications:
+ Analytic Methodologies: Experience applying analytic methodologies such as Machine Learning, Optimization, Simulation, and Generative and Agentic AI to real-world problems.
+ Continuous Learning: A commitment to lifelong learning, keeping up to date with the latest technology trends, and a willingness to teach others while learning new techniques.
+ Data Handling & Cloud: Experience with large datasets and developing in cloud computing platforms such as GCP or Azure.
+ DevOps Familiarity: Familiarity with DevOps environments, including tools like Git and CI/CD practices.
Immigration Sponsorship is not available for this role. For more information regarding who is eligible for hire at P&G, along with other work authorization FAQs, please click HERE (*******************************************************).
Procter & Gamble participates in E-Verify as required by law.
Qualified individuals will not be disadvantaged based on being unemployed.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Job Schedule
Full time
Job Number
R000135859
Job Segmentation
Entry Level
Starting Pay / Salary Range
$85,000.00 - $115,000.00 / year
FPGA Engineer
Euclid, OH
Lincoln Electric is the world leader in the engineering, design, and manufacturing of advanced arc welding solutions, automated joining, assembly and cutting systems, and plasma and oxy-fuel cutting equipment, and holds a leading global position in brazing and soldering alloys. Lincoln is recognized as the Welding Expert™ for its leading materials science, software development, automation engineering, and application expertise, which advance customers' fabrication capabilities to help them build a better world. Headquartered in Cleveland, Ohio, Lincoln Electric is a $4.2B publicly traded company (NASDAQ:LECO) with over 12,000 employees around the world, operations in 71 manufacturing and automation system integration locations across 21 countries, and a worldwide network of distributors and sales offices serving customers in over 160 countries.
Location: Euclid - 22801
Employment Status: Salary Full-Time
Function: Engineering
Pay Grade and Range: US10-E-31; Level III: Min $105,560 - Mid $124,188; Level IV: Min $133,043 - Mid $156,521
Bonus Plan: AIP
Target Bonus: Level III: 10%; Level IV: 15%
Purpose
Lincoln Electric is seeking a highly capable FPGA (Field-Programmable Gate Array) Design Engineer to join our R&D team. This role will focus on the architecture, design, and implementation of FPGA-based systems for embedded and high-performance applications. The ideal candidate will have deep experience with VHDL, timing analysis and closure, and integration of FPGA logic with ARM-based processing systems via AXI and other interconnect protocols. Familiarity with AMD (Xilinx), Intel (Altera), or Microchip (Microsemi) FPGA platforms is essential.
Duties and Responsibilities
FPGA Architecture & Design
Develop and maintain VHDL-based designs for control, signal processing, and communication subsystems.
Architect modular and reusable IP blocks for integration into complex FPGA systems.
Collaborate with hardware and software engineers to define functional requirements and partition logic between hardware, firmware, and software.
Timing Analysis & Closure
Perform static timing analysis and achieve timing closure across multiple clock domains.
Optimize designs for performance, area, and power using synthesis and place-and-route tools.
Debug timing violations and implement constraints using industry-standard tools.
Processor Interfacing & System Integration
Design and implement AXI-based interfaces to ARM processors and other embedded subsystems.
Integrate FPGA logic with SoC platforms and manage data flow between programmable logic and software.
Support development of device drivers and firmware for FPGA-accelerated functions.
Duties and Responsibilities (Continued)
Verification & Validation
Develop testbenches and simulation environments using VHDL.
Perform functional and formal verification of FPGA designs.
Support hardware bring-up and lab testing using logic analyzers, oscilloscopes, and JTAG tools.
Cross-Functional Collaboration
Work closely with embedded software, hardware, and systems teams to ensure seamless integration.
Participate in design reviews and contribute to system-level architecture decisions.
Document design specifications, test results, and performance metrics.
Innovation & Continuous Improvement
Stay current with FPGA technologies, high-level synthesis, and hardware acceleration trends.
Evaluate new tools, platforms, and methodologies to improve design efficiency and reliability.
Basic Requirements
Bachelor's degree in Electrical Engineering, Computer Engineering, or related field.
Level III: 5+ years of relevant experience.
Works independently; receives minimal guidance.
May lead projects or project steps within a broader project or have accountability for ongoing activities or objectives.
Level IV: 8+ years of relevant experience.
Recognized as an expert in own area within the organization.
Works independently, with guidance in only the most complex situations.
3+ years of experience in FPGA design and development using VHDL.
Proficiency with AMD/Xilinx, Intel/Altera, and/or Microchip/Microsemi FPGA platforms.
Strong understanding of timing analysis, constraints, and closure techniques.
Experience with AXI interconnects and integration with ARM-based processing systems.
Familiarity with simulation and verification tools such as VUnit or Vivado Simulator.
Hands-on experience with lab equipment such as oscilloscopes and logic analyzers.
Excellent problem-solving skills and ability to work in cross-functional teams.
Strong written and verbal communication skills.
Preferred Requirements
Experience with high-speed interfaces (e.g. PCI, Ethernet, DDR).
Knowledge of High-Level Synthesis tools.
Familiarity with embedded Linux and device driver development.
Understanding of security implications within FPGA-based embedded systems.
Experience with FPGA-based control systems and digital signal processing.
Lincoln Electric is an Equal Opportunity Employer. We are committed to promoting equal employment opportunity for applicants, without regard to their race, color, national origin, religion, sex (including pregnancy, childbirth, or related medical conditions, including, but not limited to, lactation), sexual orientation, gender identity, age, veteran status, disability, genetic information, and any other category protected by federal, state, or local law.
W2 only--Data Engineer--F2F Interview (No C2C)
Richmond, VA
A Data Engineer with Python, PySpark, and AWS expertise is responsible for designing, building, and maintaining scalable and efficient data pipelines in cloud environments.
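As context for the responsibilities below, here is a minimal PySpark sketch of the kind of pipeline described; the bucket names, paths, and column names are hypothetical placeholders.

```python
# Minimal PySpark ETL sketch; buckets, paths, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: ingest raw JSON from S3 (Glue/EMR clusters provide the S3 connector).
raw = spark.read.json("s3://example-raw/orders/")

# Transform: basic cleansing plus a daily aggregate suitable for Athena/Redshift.
clean = (raw
         .dropDuplicates(["order_id"])
         .filter(F.col("amount") > 0)
         .withColumn("order_date", F.to_date("order_ts")))

daily = clean.groupBy("order_date").agg(F.sum("amount").alias("revenue"))

# Load: write partitioned Parquet back to S3.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated/daily_revenue/")
```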
Responsibilities:
Design, develop, and maintain robust ETL/ELT pipelines using Python and PySpark for data ingestion, transformation, and processing.
Work extensively with AWS cloud services such as S3, Glue, EMR, Lambda, Redshift, Athena, and DynamoDB for data storage, processing, and warehousing.
Build and optimize data ingestion and processing frameworks for large-scale data sets, ensuring data quality, consistency, and accuracy.
Collaborate with data architects, data scientists, and business intelligence teams to understand data requirements and deliver effective data solutions.
Implement data governance, lineage, and security best practices within data pipelines and infrastructure.
Automate data workflows and improve data pipeline performance through optimization and tuning.
Develop and maintain documentation for data solutions, including data dictionaries, lineage, and technical specifications.
Participate in code reviews, contribute to continuous improvement initiatives, and troubleshoot complex data and pipeline issues.
Required Skills:
Strong programming proficiency in Python, including libraries like Pandas and extensive experience with PySpark for distributed data processing.
Solid understanding and practical experience with Apache Spark/PySpark for large-scale data transformations.
Demonstrated experience with AWS data services, including S3, Glue, EMR, Lambda, Redshift, and Athena.
Proficiency in SQL and a strong understanding of data modeling, schema design, and data warehousing concepts.
Experience with workflow orchestration tools such as Apache Airflow or AWS Step Functions.
Familiarity with CI/CD pipelines and version control systems (e.g., Git).
Excellent problem-solving, analytical, and communication skills, with the ability to work effectively in a team environment.
Preferred Skills:
Experience with streaming frameworks like Kafka or Kinesis.
Knowledge of other data warehousing solutions like Snowflake.
Thanks & regards,
K Hemanth | Recruitment Specialist
Email: ****************************
Azure Cloud Data Engineer
Charlotte, NC
A leading global Bank is strengthening its Data Engineering & Production Support function, seeking an Azure Cloud Data Engineer to support mission-critical data pipelines, orchestrations, and ETL workloads within Azure. This is a hands-on, high-impact role focused on reliability, uptime, and seamless data movement across the enterprise.
About the Role:
You'll be the key specialist supporting Azure Data Factory and Databricks during a major cloud migration initiative. If you thrive in fast-paced production environments, solving pipeline failures, optimizing Spark jobs, and keeping complex data workflows running, this role gives you real responsibility, visibility, and technical ownership.
Responsibilities:
Support and troubleshoot Azure Data Factory pipelines and Databricks jobs across production environments.
Ensure stability, performance, and uptime of ETL/ELT workflows during cloud migration initiatives.
Perform root-cause analysis on pipeline failures, Spark issues, cluster timeouts, and data mismatches (see the run-status sketch after this list).
Collaborate with engineering and DevOps teams to optimize workflows, deployments, and cloud integrations.
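By way of illustration, a minimal sketch of checking recent Azure Data Factory pipeline runs for failures using the azure-mgmt-datafactory SDK; the subscription, resource group, and factory names are placeholders.

```python
# Minimal sketch: list failed ADF pipeline runs from the last 24 hours.
# Subscription, resource group, and factory names are hypothetical.
from datetime import datetime, timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

params = RunFilterParameters(
    last_updated_after=datetime.utcnow() - timedelta(days=1),
    last_updated_before=datetime.utcnow(),
)
runs = client.pipeline_runs.query_by_factory("<resource-group>", "<factory-name>", params)

for run in runs.value:
    if run.status == "Failed":
        # run.message carries the error surfaced by the failed activity.
        print(run.pipeline_name, run.run_id, run.message)
```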
Qualifications
Strong experience with Azure Data Factory, Databricks, and Azure-based data engineering.
Proven background in ETL/ELT processes, data integration, and orchestration.
Solid understanding of cloud data services (ADLS Gen2, Azure SQL, Databricks, Functions, etc.).
Hands-on experience in monitoring, debugging, and optimizing production data pipelines.
Required Skills
Azure Data Factory
Databricks
ETL/ELT processes
Data integration
Cloud data services
Preferred Skills
Experience with Azure SQL
Knowledge of ADLS Gen2
Familiarity with Azure Functions
Data Engineer
Montgomery, AL
Role: Data Engineer (Builder)
Duration: Contract to Hire
Visa: USC or GC Only
Experience Needed
5-7 years in data engineering or database development.
Hands‑on experience with SQL Server ETL/ELT pipelines.
Experience integrating pipelines with cloud services (AWS Glue, Azure Data Factory, GCP Dataflow).
Familiarity with streaming technologies (Kafka, Kinesis).
Experience in data modeling and architecture design.
Proficiency in Python/Scala/Java programming for pipeline development.
Exposure to DevOps automation (Terraform, Ansible) and containerization (Docker, Kubernetes).
Demonstrated DevOps and automation maturity, ideally evidenced by certifications (HashiCorp Terraform Associate, AWS DevOps Engineer).
Preferred: Advanced programming depth with applied coursework or certifications (Python Institute PCPP, Scala Professional Certification).
Preferred: Data modeling specialization with advanced coursework or vendor‑specific training (Snowflake, AWS Big Data Specialty).
Education
Bachelor's degree in Computer Science, Software Engineering, or related technical field.
Certifications (Preferred)
AWS Certified Data Engineer
Azure Data Engineer Associate
Google Professional Data Engineer
Software Use
SQL Server (ETL/ELT pipelines, stored procedures).
Orchestration and transformation tools (Airflow, dbt); a minimal DAG sketch follows this list.
Cloud integration services (AWS Glue, Azure Data Factory, GCP Dataflow).
Observability tools (OpenLineage, Monte Carlo).
DevOps automation tools (Terraform, Ansible).
Containerization platforms (Docker, Kubernetes).
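For orientation, a minimal Airflow DAG sketch of the orchestration layer listed above; the dag_id, schedule, and task bodies are hypothetical stand-ins for real SQL Server ETL steps.

```python
# Minimal Airflow DAG sketch; dag_id, schedule, and callables are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from SQL Server")          # stand-in for the real extract step

def load():
    print("load curated rows to the warehouse")  # stand-in for the real load step

with DAG(
    dag_id="sqlserver_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```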
Senior Data Engineer
Durham, NC
We are seeking an experienced Senior Big Data & Cloud Engineer to design, build, and deliver advanced API and data solutions that support financial goal planning, investment insights, and projection tools. This role is ideal for a seasoned engineer with 10+ years of hands-on experience in big data processing, distributed systems, cloud-native development, and end-to-end data pipeline engineering.
You will work across retail, clearing, and custody platforms, leveraging modern cloud and big data technologies to solve complex engineering challenges. The role involves driving technology strategy, optimizing large-scale data systems, and collaborating across multiple engineering teams.
Key Responsibilities
Design and develop large-scale data movement services using Apache Spark (EMR) or Spring Batch.
Build and maintain ETL workflows, distributed pipelines, and automated batch processes.
Develop high-quality applications using Java, Scala, REST, and SOAP integrations.
Implement cloud-native solutions leveraging AWS S3, EMR, EC2, Lambda, Step Functions, and related services.
Work with modern storage formats and NoSQL databases to support high-volume workloads.
Contribute to architectural discussions and code reviews across engineering teams.
Drive innovation by identifying and implementing modern data engineering techniques.
Maintain strong development practices across the full SDLC.
Design and support multi-region disaster recovery (DR) strategies.
Monitor, troubleshoot, and optimize distributed systems using advanced observability tools.
Required Skills:
10+ years of experience in software/data engineering with strong big data expertise.
Proven ability to design and optimize distributed systems handling large datasets.
Strong communicator who collaborates effectively across teams.
Ability to drive architectural improvements and influence engineering practices.
Customer-focused mindset with commitment to delivering high-quality solutions.
Adaptable, innovative, and passionate about modern data engineering trends.
Senior Data Warehouse & BI Developer
San Leandro, CA
About the Role
We're looking for a Senior Data Warehouse & BI Developer to join our Data & Analytics team and help shape the future of Ariat's enterprise data ecosystem. You'll design and build data solutions that power decision-making across the company, from eCommerce to finance and operations.
In this role, you'll take ownership of data modeling and BI reporting using Cognos and Tableau, and contribute to the development of SAP HANA Calculation Views. If you're passionate about data architecture, visualization, and collaboration, and love learning new tools, this role is for you.
You'll Make a Difference By
Designing and maintaining Ariat's enterprise data warehouse and reporting architecture.
Developing and optimizing Cognos reports for business users.
Collaborating with the SAP HANA team to develop and enhance Calculation Views.
Translating business needs into technical data models and actionable insights.
Ensuring data quality through validation, testing, and governance practices.
Partnering with teams across the business to improve data literacy and reporting capabilities.
Staying current with modern BI and data technologies to continuously evolve Ariat's analytics stack.
About You
7+ years of hands-on experience in BI and Data Warehouse development.
Advanced skills in Cognos (Framework Manager, Report Studio).
Strong SQL skills and experience with data modeling (star schemas, dimensional modeling).
Experience building and maintaining ETL processes.
Excellent analytical and communication skills.
A collaborative, learning-oriented mindset.
Experience developing SAP HANA Calculation Views preferred.
Experience with Tableau (Desktop, Server) preferred.
Knowledge of cloud data warehouses (Snowflake, BigQuery, etc.).
Background in retail or eCommerce analytics.
Familiarity with Agile/Scrum methodologies.
About Ariat
Ariat is an innovative, outdoor global brand with roots in equestrian performance. We develop high-quality footwear and apparel for people who ride, work, and play outdoors, and care about performance, quality, comfort, and style.
The salary range for this position is $120,000 - $150,000 per year.
The salary is determined by the education, experience, knowledge, skills, and abilities of the applicant, internal equity, and alignment with market data for geographic locations. Ariat in good faith believes that this posted compensation range is accurate for this role at this location at the time of this posting. This range may be modified in the future.
Ariat's holistic benefits package for full-time team members includes (but is not limited to):
Medical, dental, vision, and life insurance options
Expanded wellness and mental health benefits
Paid time off (PTO), paid holidays, and paid volunteer days
401(k) with company match
Bonus incentive plans
Team member discount on Ariat merchandise
Note: Availability of benefits may be subject to location & employment type and may have certain eligibility requirements. Ariat reserves the right to alter these benefits in whole or in part at any time without advance notice.
Ariat will consider qualified applicants, including those with criminal histories, in a manner consistent with state and local laws. Ariat is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis protected under federal, state, or local law. Ariat is committed to providing reasonable accommodations to candidates with disabilities. If you need an accommodation during the application process, email *************************.
Please see our Employment Candidate Privacy Policy at ********************* to learn more about how we collect, use, retain and disclose Personal Information.
Please note that Ariat does not accept unsolicited resumes from recruiters or employment agencies. In the absence of a signed Agreement, Ariat will not consider or agree to payment of any referral compensation or recruiter/agency placement fee. In the event a recruiter or agency submits a resume or candidate without a previously signed Agreement, Ariat explicitly reserves the right to pursue and hire those candidate(s) without any financial obligation to the recruiter or agency. Any unsolicited resumes, including those submitted directly to hiring managers, are deemed to be the property of Ariat.
Vision Engineer
Dallas, TX
$130-150k base salary
Hybrid - Joplin MO, Tuscaloosa AL, Dallas TX, Frederick MD, Phillipsburg KS, Knoxville TN
I'm currently partnering with a US-based building materials manufacturer who is looking for a Vision Engineer to lead the development and management of their AI computer vision/machine vision solutions to ensure the quality of internal products.
Core responsibilities include:
Developing AI models and vision algorithms for inspection
Collecting datasets for robust models
Overseeing the cleaning/preprocessing of image data across a manufacturing environment
Incorporating normalization, noise reduction and data augmentation techniques to enhance data usability/model performance and maintain preprocessing pipelines
Using new data and feedback to continuously refine existing models and maximize efficiency (a brief preprocessing/augmentation sketch follows this list)
Designing validation tests to ensure consistent model performance in different scenarios and implementing frameworks to assess vision system performance
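To make the preprocessing items above concrete, here is a minimal OpenCV/NumPy sketch of normalization, noise reduction, and a few simple augmentations; the image path and parameters are hypothetical.

```python
# Minimal preprocessing/augmentation sketch; path and parameters are hypothetical.
import cv2
import numpy as np

img = cv2.imread("part_0001.png", cv2.IMREAD_GRAYSCALE)  # hypothetical inspection image

# Normalization and noise reduction, as described above.
img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)
img = cv2.GaussianBlur(img, (3, 3), 0)

# Simple augmentations to enlarge the training set.
flipped = cv2.flip(img, 1)                             # horizontal flip
h, w = img.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), 10, 1.0)   # 10-degree rotation
rotated = cv2.warpAffine(img, M, (w, h))
noisy = np.clip(img + np.random.normal(0, 8, img.shape), 0, 255).astype(np.uint8)
```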
Key Skills & Experience:
Good programming skills in C++ and Python; you will also be exposed to some third-party software
A manufacturing background is crucial (autonomous vehicle, robotics, and biomedical experience is less relevant here)
Demonstrable experience with computer vision models and machine learning frameworks in a manufacturing setting
A good understanding of data augmentation techniques and different image formats is ideal, along with experience managing large datasets
Bachelor's degree in Computer Science, Machine Learning, or a related field
Interested in this role? Apply now or share a copy of your resume to ***************************
Senior Software Engineer
San Jose, CA
Role: Senior Software Engineer
Must Have skills:
Senior Fullstack Developer with 12+ years of overall experience.
Strong expertise in React, TypeScript, Node.js, Java, and building scalable web applications.
Experience with MERN stack and developing React component libraries is required.
Ability to lead development efforts, drive architecture decisions, and manage large-scale projects.
Must have hands-on experience with cloud-based development (AWS) and modern software engineering practices.
Opportunity to own end-to-end design and contribute to a high-impact supply chain solution.
AWS DevOps Engineer
Reston, VA
About the Company
The Data Engineering and Advanced Analytics Enablement team is seeking an experienced AWS DevOps Engineer to lead the enablement of analytics tools on a cloud-native architecture.
About the Role
This role will design, build, and maintain infrastructure that supports next-generation analytics platforms, leveraging best practices in Infrastructure as Code (IaC), high availability, fault tolerance, and operational excellence for COTS deployments in AWS. Expertise in Amazon EKS, CI/CD automation, and scripting is essential.
Responsibilities
Drive technical requirements gathering and solution design sessions with engineers, architects, and product managers in an Agile environment.
Design, build, and deploy infrastructure for analytics COTS tools (e.g., Posit/RStudio, Anaconda Distribution) on AWS.
Develop and automate scripts for deploying and configuring software components, from large-scale COTS products to custom microservices.
Implement Infrastructure as Code using AWS services and tools such as Terraform, CloudFormation, CodePipeline, Lambda (R/Python), CloudWatch, Route53, S3, and more (a minimal scripted-deploy sketch follows this list).
Architect and manage Amazon EKS clusters for containerized workloads, ensuring scalability and security.
Continuously improve infrastructure provisioning automation, CI/CD pipelines, and operational excellence in the cloud.
Monitor and optimize system performance in AWS for reliability, cost efficiency, and resilience.
Conduct peer reviews and enhance existing infrastructure for scalability and fault tolerance.
Participate in on-call rotations for critical outages and incidents.
Maintain comprehensive technical documentation, including system diagrams and operational procedures.
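As a concrete flavor of the IaC automation above, a minimal boto3 sketch that deploys a CloudFormation stack and waits for completion; the stack name and template file are hypothetical.

```python
# Minimal boto3 sketch of a scripted CloudFormation deployment;
# the stack name and template path are hypothetical.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

with open("analytics-stack.yaml") as f:   # hypothetical CloudFormation template
    template_body = f.read()

cfn.create_stack(
    StackName="analytics-cots-dev",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Block until provisioning finishes, mirroring a CI/CD deploy step.
cfn.get_waiter("stack_create_complete").wait(StackName="analytics-cots-dev")
```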
Qualifications
7+ years of IT experience, including software development and DevOps functions.
5+ years of experience building and maintaining CI/CD tooling (GitLab Pipelines, Jenkins Pipelines, Bitbucket, GitHub) and creating/extending CI/CD environments via Terraform and CloudFormation.
3+ years of production experience with core AWS services (EKS, EC2, S3, RDS, API Gateway, ALB, ELB, Lambda, etc.).
3+ years of hands-on experience with Amazon EKS and container orchestration.
3+ years of Unix/Linux system administration experience.
Proficiency in Python (preferred) and R.
Strong automation scripting skills in Bash, Shell, Python, and familiarity with Java, JavaScript, Ansible, Perl.
Experience supporting web technologies and websites running Apache or NGINX.
Familiarity with open-source web service environments (Java, REST, SOAP).
Working experience with Confluence, Jira SaaS, SharePoint, and other collaboration tools.
Required Skills
Deep understanding of the AWS Well-Architected Framework.
Strong analytical, organizational, and problem-solving skills.
Excellent verbal and written communication abilities.
Effective teamwork, planning, and coordination skills.
Self-motivated, adaptable, and capable of meeting aggressive deadlines.
Ability to independently research and resolve technical challenges in complex IT environments.
Senior Software Engineer
Santa Rosa, CA
Role: Senior Software Engineer
Must Have skills:
Senior Fullstack Developer with 12+ years of overall experience.
Strong expertise in React, TypeScript, Node.js, Java, and building scalable web applications.
Experience with MERN stack and developing React component libraries is required.
Ability to lead development efforts, drive architecture decisions, and manage large-scale projects.
Must have hands-on experience with cloud-based development (AWS) and modern software engineering practices.
Opportunity to own end-to-end design and contribute to a high-impact supply chain solution.
Senior Software Engineer
San Francisco, CA
Role: Senior Software Engineer
Must Have skills:
Senior Fullstack Developer with 12+ years of overall experience.
Strong expertise in React, TypeScript, Node.js, Java, and building scalable web applications.
Experience with MERN stack and developing React component libraries is required.
Ability to lead development efforts, drive architecture decisions, and manage large-scale projects.
Must have hands-on experience with cloud-based development (AWS) and modern software engineering practices.
Opportunity to own end-to-end design and contribute to a high-impact supply chain solution.
Backend Engineer (Distributed Systems and Kubernetes)
Dallas, TX
Software Engineer - Batch Compute (Kubernetes / HPC)
Dallas (Hybrid) | 💼 Full-time
A leading, well-funded quantitative research and technology firm is looking for a Software Engineer to join a team building and running a large-scale, high-performance batch compute platform.
You'll be working on modern Kubernetes-based infrastructure that powers complex research and ML workloads at serious scale, including contributions to a well-known open-source scheduling project used for multi-cluster batch computing.
What you'll be doing
• Building and developing backend services, primarily in Go (Python, C++, C# backgrounds are fine)
• Working on large-scale batch scheduling and distributed systems on Kubernetes (see the Job-submission sketch after this list)
• Operating and improving HPC-style workloads, CI/CD pipelines, and Linux-based platforms
• Optimising data flows across systems using tools like PostgreSQL
• Debugging and improving performance across infrastructure, networking, and software layers
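For a sense of the platform work above, a minimal sketch of submitting a batch Job through the official Kubernetes Python client (the team's primary language is Go; Python is used here for brevity); the image, namespace, and command are hypothetical.

```python
# Minimal sketch: submit a batch Job via the Kubernetes Python client.
# Image, namespace, names, and command are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()   # or load_incluster_config() when running in a pod
batch = client.BatchV1Api()

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="research-task-001"),
    spec=client.V1JobSpec(
        backoff_limit=2,   # retry a failed pod at most twice
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="main",
                    image="example.com/research:latest",
                    command=["python", "run_experiment.py"],
                )],
            ),
        ),
    ),
)
batch.create_namespaced_job(namespace="research", body=job)
```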
What they're looking for
• Strong software engineering background with an interest in Kubernetes and batch workloads
• Experience with Kubernetes internals (controllers, operators, schedulers)
• Exposure to HPC, job schedulers, or DAG-based workflows
• Familiarity with cloud platforms (ideally AWS), observability tooling, and event-driven systems
Why it's worth a look
• Market-leading compensation plus bonus
• Hybrid setup from a brand-new Dallas office
• Strong work/life balance and excellent benefits
• Generous relocation support if needed
• The chance to work at genuine scale on technically hard problems
If you're interested (or know someone who might be), drop me a message and I'm happy to share more details anonymously.
Senior Software Engineer
Fremont, CA
Role: Senior Software Engineer
Must Have skills:
Senior Fullstack Developer with 12+ years of overall experience.
Strong expertise in React, TypeScript, Node.js, Java, and building scalable web applications.
Experience with MERN stack and developing React component libraries is required.
Ability to lead development efforts, drive architecture decisions, and manage large-scale projects.
Must have hands-on experience with cloud-based development (AWS) and modern software engineering practices.
Opportunity to own end-to-end design and contribute to a high-impact supply chain solution.
Senior Software Engineer
Springfield, VA
Job Title: Senior Software Engineer
Security Clearance: Active TS/SCI (or SCI eligibility)
Omni Federal is a mid-size business focused on modern application development, cloud, and data analytics for the Federal government. Our past performance is a mix of commercial and federal business that allows us to leverage the latest commercial technologies and processes and adapt them to the Federal government. Omni Federal designs, builds, and operates data-rich applications leveraging advanced data modeling, machine learning, and data visualization techniques to empower our customers to make better data-driven decisions.
We are seeking a strong Software Engineer to support an NGA project in Springfield, VA. This is an exciting modernization initiative where the NGA is embracing modern software development practices and using them to solve challenging missions and provide various capabilities for the NGA. This includes a modern technology stack, rapid prototyping in support of intelligence analysis products and capabilities, and a culture of innovation. Candidates must be passionate, energized, and excited to work on modern architectures and solve challenging problems for our clients.
Required Skills:
BS or equivalent in Computer Science, Engineering, Mathematics, Information Systems or equivalent technical degree.
10+ years of experience in software engineering/development, or a related area that demonstrates the ability to successfully perform the duties associated with this work.
Experience in Java or Python enterprise application development
Experience building high performance applications in React.js
Web services architecture, design, and development
Experience in PostgreSQL database design
Experience working in AWS and utilizing specific AWS tooling (S3)
Microsoft 365 Engineer
Dallas, TX
Role : Microsoft 365 Engineer
Type: Contract
Description -
We are seeking a highly skilled Microsoft 365 Engineer with deep expertise in SharePoint development & administration, Power Platform solutions, and proven experience delivering SharePoint and Microsoft 365 migration projects. The ideal candidate combines a strong technical foundation with excellent problem-solving, communication, and cross-functional collaboration skills. This role plays a key part in designing, developing, migrating, integrating, and supporting Microsoft 365 solutions to enhance organizational productivity and collaboration.
Key Responsibilities
1. SharePoint Development & Administration
Design, develop, and deploy SharePoint Online solutions including sites, libraries, lists, content types, workflows, and modern page customizations.
Manage and configure SharePoint architecture, permissions, site collections, governance policies, and security structures.
Develop custom solutions using SPFx, PowerShell, PnP, JavaScript, and other relevant technologies.
Optimize search, content management, and information architecture.
Perform SharePoint environment health checks, performance tuning, and compliance/governance management.
2. Power Platform Development
Design and build end-to-end business solutions using Power Apps, Power Automate, and Power BI.
Create custom connectors, automation flows, forms, dashboards, and integrations aligned with business requirements.
Support and troubleshoot Power Platform applications and governance best practices.
Ensure solutions follow Microsoft best practices for scalability, performance, and security.
3. Microsoft 365 Migration & Integration
Lead or support end-to-end SharePoint migrations (on-premises to SharePoint Online, tenant-to-tenant, legacy platforms, etc.).
Execute Microsoft 365 migration tasks, including OneDrive, Teams, Exchange Online, and related workloads.
Utilize migration tools such as ShareGate, Metalogix, AvePoint, or Microsoft-native tooling.
Integrate Microsoft 365 components with third-party applications, line-of-business systems, and internal applications.
Plan and implement data structure mapping, content assessment, remediation, and validation.
4. Microsoft 365 Administration & Support
Provide Tier 2/3 support for all Microsoft 365 services including SharePoint, Teams, OneDrive, Power Platform, and Azure AD.
Monitor service health, security alerts, compliance notifications, and tenant configurations.
Manage identity, access, conditional access policies, and license administration.
Educate and support end-users, create documentation, and develop training materials as needed.
Troubleshoot issues across the Microsoft 365 ecosystem and perform root-cause analysis.
Required Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
5+ years hands-on experience with SharePoint Online and Microsoft 365 administration.
Strong proficiency in SharePoint development (SPFx, JavaScript, PowerShell, PnP, APIs).
Demonstrated experience performing SharePoint migrations and Microsoft 365 integrations.
Proven ability to design solutions using Power Apps, Power Automate, and Power Platform governance.
Experience with Azure AD, Exchange Online, Teams administration, and Microsoft 365 security concepts.
Familiarity with Agile methodologies and working in cross-functional project teams.
Preferred Qualifications
Microsoft certifications such as:
Microsoft 365 Certified: Developer Associate
Microsoft 365 Certified: Administrator Associate
Power Platform Developer/Functional Consultant
SharePoint Associate
Experience with REST APIs, Graph API, .NET, and scripting languages.
Knowledge of compliance, retention, DLP, and governance frameworks.
Experience in enterprise-level operational support environments.
Soft Skills
Strong communication and client-facing interpersonal skills.
Detail-oriented with excellent documentation habits.
Ability to manage multiple projects and deadlines in a dynamic environment.
Strong analytical and troubleshooting capabilities.
Sr. Software Developer
Nashville, TN
Sr. Software Developer
Type: Permanent/ Full Time / Direct Hire
Immediate start
Required:
Bachelor's degree in STEM (Science, Technology, Engineering, Math)
Minimum 8 years of software development experience
AWS, Azure, Docker, HubSpot, Kubernetes
Python, Java, Javascript, SQL, HTML
Senior Market Risk Application Developer (Java/Groovy - Capital Markets)
New York, NY
The ideal candidate will have strong hands-on development experience, a deep understanding of market risk and fixed-income products, and the ability to integrate with vendor systems such as Murex, Polypaths, ION, Bloomberg, etc.
Key Responsibilities
Analyze, design, develop, deploy, and maintain software applications supporting Capital Markets business units.
Participate in end-to-end development: requirement gathering, design, coding, testing, deployment, and documentation.
Develop and integrate Java/Groovy components within complex fixed-income and risk technology stacks.
Collaborate with business and technology teams to support market risk and credit risk functions.
Integrate with vendor systems such as ION, Bloomberg, Polypaths, Murex.
Perform system analysis, detailed design, component testing, integration testing, and quality assurance.
Ensure compliance with system requirements, business objectives, and security standards.
Drive technical excellence, best practices, and continuous improvement.
Support multiple projects simultaneously and provide advanced technical guidance where needed.
Contribute to the design and delivery of complex software solutions for capital markets.
Required Skills & Abilities
Strong hands-on proficiency in Java and/or Groovy, with solid experience in systems integration.
Additional scripting experience with Python.
Strong proficiency in SQL and experience with distributed multi-tier applications.
Experience with development tools/frameworks such as Git, Gradle, Camel, Kafka.
Familiarity with AWS services (EC2, S3).
Solid understanding of SDLC methodologies (Agile & Waterfall).
Strong knowledge of fixed-income products, trade flows, valuations, and risk management.
Experience integrating with systems such as ION, Bloomberg, Polypaths, Murex.
Strong analytical, problem-solving, communication, and presentation skills.
Understanding of core Computer Science fundamentals:
Web development
Service-oriented architecture
Cloud computing
Test-driven development
Domain-driven design
Education
Bachelor's degree in Computer Science, Information Technology, or a related field.
Equivalent work experience accepted.
Work Experience
10+ years of experience in Information Technology, Software/Application Development, or Capital Markets technology.
Experience developing solutions for risk, pricing, or trading environments is highly preferred.
Software Engineer - Intelligent Systems
Berkeley, CA
Compensation: Up to $135K base salary
My client is a Series C renewable-energy automation unicorn, founded in 2019 and backed by more than $200M in funding. They are building intelligent systems that transform how large-scale renewable energy projects are designed and delivered. They're hiring a Software Engineer - Intelligent Systems to develop AI-powered tools using Azure OpenAI, AWS Bedrock, and AgentCore to automate complex engineering workflows. This role is ideal for a recent M.S. or Ph.D. graduate passionate about AI, automation, and multi-cloud technologies.
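For a flavor of the work, here is a minimal sketch of an Azure OpenAI chat call of the kind such agents build on, using the openai Python SDK; the endpoint, deployment name, and prompt are hypothetical.

```python
# Minimal Azure OpenAI chat-completion sketch; endpoint, deployment name,
# and prompt content are hypothetical placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint="https://example.openai.azure.com",  # hypothetical endpoint
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4o",  # the Azure *deployment* name; hypothetical here
    messages=[
        {"role": "system",
         "content": "You extract structured fields from engineering documents."},
        {"role": "user",
         "content": "List the cable gauges mentioned in: '... 10 AWG feeder ...'"},
    ],
)
print(response.choices[0].message.content)
```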
What You'll Do
Build AI-driven automation workflows and reasoning chains
Develop LLM-based agents with Azure OpenAI and AWS Bedrock
Work on retrieval systems and Document AI integrations
Deploy and optimize agents across Azure, AWS, edge, and on-prem environments
Translate engineering workflows into intelligent systems
Test, validate, and document system behavior
What We're Looking For
Bachelor's or Master's in CS, AI, Computational Linguistics, or related field (M.S./Ph.D. preferred)
Experience with AI/ML, NLP, or intelligent systems
Strong Python programming skills
Familiarity with frameworks like LangChain or LangGraph
Exposure to Azure OpenAI, AWS Bedrock, and AgentCore
Understanding of REST APIs, asynchronous programming, and data integration
Senior Palantir Foundry Developer
Reston, VA
The Senior Palantir Foundry Developer will design, develop, and deploy advanced data integration and analytics solutions using the Palantir Foundry platform. This role requires deep technical expertise in Foundry's ecosystem, strong data engineering skills, and the ability to translate complex business requirements into scalable, secure, and performant solutions. The developer will also mentor junior team members and collaborate with cross-functional stakeholders to deliver impactful data-driven applications.
Skills:
Solution Design & Development:
Build and optimize data pipelines, Ontology models, and Foundry applications (Workshop, Contour, Quiver, Slate).
Develop custom workflows and dashboards using Foundry's suite of tools.
Data Integration & Transformation:
Implement robust ingestion strategies for structured and unstructured data.
Apply PySpark, SQL, and Foundry transformations for data cleansing and enrichment (a minimal transform sketch closes this listing).
Application Development:
Create operational workflows and user-facing applications within Foundry.
Integrate Foundry with cloud services (AWS, Azure, GCP) and external APIs.
Governance & Security:
Ensure compliance with data governance, lineage, and security standards (RBAC, encryption).
Technical Leadership:
Act as a subject matter expert for Palantir Foundry.
Provide mentorship and enforce best practices in development and deployment.
Innovation:
Explore and implement GenAI/LLM capabilities within Palantir AIP for advanced analytics.
Stay updated on Foundry features and drive adoption of new functionalities.
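As referenced in the Data Integration & Transformation item above, a minimal sketch of a Foundry Python transform using the transforms.api decorator; the dataset paths and column names are hypothetical.

```python
# Minimal Foundry Python transform sketch; dataset paths and columns
# are hypothetical placeholders.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output

@transform_df(
    Output("/Example/datasets/orders_clean"),   # hypothetical output dataset
    source=Input("/Example/datasets/orders_raw"),
)
def clean_orders(source):
    # Basic cleansing/enrichment in PySpark, as the listing describes.
    return (source
            .dropDuplicates(["order_id"])
            .filter(F.col("amount").isNotNull())
            .withColumn("order_date", F.to_date("order_ts")))
```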