Senior Data Engineer
Remote job
Role: Senior Data Engineer (Data Quality Framework Team)
Location: Hybrid (3 days onsite, 2 days work from home) - Pittsburgh/Cleveland/Dallas/Birmingham, AL/Phoenix
Duration: Full Time
Experience: 7 - 10 Years
Design and build scalable data pipelines using PySpark, SQL, and Hadoop.
Develop and implement data quality rules, validation checks, and monitoring dashboards (see the sketch after this list).
Collaborate with data architects, analysts, and QE engineers to ensure end-to-end data integrity.
Establish coding standards, reusable components, and version control practices for data engineering workflows.
Optimize performance of ETL/ELT processes and troubleshoot data issues in production environments.
Support regulatory compliance and data governance by integrating data lineage, metadata, and audit capabilities.
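As an illustration of the data quality rules and validation checks mentioned above, here is a minimal PySpark sketch; the source path and column names are hypothetical, not taken from the posting.

```python
# Minimal PySpark data-quality sketch (hypothetical path and column names).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("s3://warehouse/orders/")  # hypothetical source

# Rule 1: key columns must be non-null.
null_keys = df.filter(F.col("order_id").isNull()).count()

# Rule 2: amounts must be non-negative.
bad_amounts = df.filter(F.col("amount") < 0).count()

# Rule 3: no duplicate primary keys.
dupes = df.groupBy("order_id").count().filter(F.col("count") > 1).count()

results = {"null_keys": null_keys, "bad_amounts": bad_amounts, "duplicate_ids": dupes}
print(results)  # in practice these metrics would feed a monitoring dashboard

if any(v > 0 for v in results.values()):
    raise ValueError(f"Data quality checks failed: {results}")
```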
ETL Informatica IICS Developer - 12-Month Contract - Remote Opportunity - Direct Customer
Remote job
Greetings from Accion Labs,
Our direct client is looking for an ETL Informatica IICS Developer for a 12-month, fully remote contract with a direct customer.
Primary skills: Data Engineering & ETL/ELT, ODI or Informatica Cloud (IICS), SQL/PL-SQL
Job Description:
The ETL engineer will install, test, and maintain ETL jobs and processes:
• 5 years' experience in IICS development and support
• Troubleshoot and resolve production issues and provide high-level support on system software
• Be part of the production support team spanning multiple time zones and geographies
• Coordinate with internal IT teams to analyze and resolve production process failures
• Prepare and execute processes to correct data discrepancies in reporting tables
• Provide 24x7 on-call support on a rotation basis
• Ensure all service level objectives are achieved or exceeded
• Join conference calls with other IT departments to support recovery from outages
• Perform release management and post-implementation tasks for software releases to production environments
• Respond to business user requests regarding data issues and outages
• Provide feedback to Application Development teams on opportunities to make code more reliable, faster, and easier to maintain
• Provide technical analysis to help debug issues, perform root cause analysis, and eliminate repeated incidents
• Collaborate with team members to resolve complex issues and assure the successful delivery of IT solutions
• Automate manual, repeatable tasks
• Develop and maintain documentation, technical procedures, and user guides
Education:
Bachelor's degree in Computer Science, Information Systems, or a related discipline.
This role is open to W2 candidates or those seeking Corp-to-Corp employment.
The salary range for this role is $90-100k/annum; for Corp-to-Corp rates, please contact the recruiter.
In addition to other benefits, Accion Labs offers a comprehensive benefits package, with Accion covering 65% of the medical, dental, and vision premiums for employees, their spouses, and dependent children enrolling in the Accion-provided plans.
ETL/ELT Data Engineer (Secret Clearance) - Hybrid
Remote job
LaunchCode is recruiting for a Software Data Engineer to work at one of our partner companies!
Details:
Full-Time W2, Salary
Immediate opening
Hybrid - Austin, TX (onsite 1-2 times a week)
Pay $85K-$120K
Minimum Experience: 4 years
Security Clearance: Active DoD Secret Clearance
Disclaimer: Please note that we are unable to provide work authorization or sponsorship for this role, now or in the future. Candidates requiring current or future sponsorship will not be considered.
Job description
Job Summary
A Washington, DC-based software solutions provider, founded in 2017, specializes in delivering mission-critical and enterprise solutions to the federal government. Originating from the Department of Defense's software factory ecosystem, the company focuses on Command and Control, Cybersecurity, Space, Geospatial, and Modeling & Simulation. The company leverages commercial technology to enhance the capabilities of the DoD, IC, and their end-users, with innovation driven by its innovation centers. The company has a presence in Boston, MA, Colorado Springs, CO, San Antonio, TX, and St. Louis, MO.
Why the company?
Environment of Autonomy
Innovative Commercial Approach
People over process
We are seeking a passionate Software Data Engineer to support the Army Software Factory (ASWF) in aligning with DoDM 8140.03 Cyber Workforce requirements and broader compliance mandates. The Army Software Factory, a first-of-its-kind initiative under Army Futures Command, is revolutionizing the Army's approach to software development by training and employing self-sustaining technical talent from across the military and civilian workforce. Guided by the motto “By Soldiers, For Soldiers,” ASWF equips service members to develop mission-critical software solutions independently - especially vital for future contested environments where traditional technical support may be unavailable. This initiative also serves as a strategic prototype to modernize legacy IT processes and build technical readiness across the force to ensure battlefield dominance in the digital age.
Required Skills:
Active DoD Secret Clearance (Required)
4+ years of experience in data science, data engineering, or similar roles.
Expertise in designing, building, and maintaining scalable ETL/ELT pipelines using tools and languages such as Python, SQL, Apache Spark, or Airflow (see the sketch after this list).
Strong proficiency in working with relational and NoSQL databases, including experience with database design, optimization, and query performance tuning (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
Demonstrable experience with cloud data platforms and services (e.g., AWS Redshift, S3, Glue, Athena; Azure Data Lake, Data Factory, Synapse; Google BigQuery, Cloud Storage, Dataflow).
Solid understanding of data warehousing concepts (e.g., Kimball, Inmon methodologies) and experience with data modeling for analytical purposes.
Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Java, Scala) for data manipulation, scripting, and automation.
CompTIA Security+ Certified or otherwise DoDM 8140.03 (formerly DoD 8570.01-M) compliant.
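The list above names Airflow among the pipeline tools; as a flavor of what that looks like, here is a minimal Airflow 2.x DAG sketch with a hypothetical task (the task logic is a placeholder, not anything from this posting).

```python
# Minimal Airflow DAG sketch for a daily ETL job (hypothetical task logic).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder for the real extract/transform/load logic.
    print("pulling source data and loading the warehouse")


with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # the `schedule` argument assumes Airflow 2.4+
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```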
Nice to Have:
Familiarity with SBIR technologies and transformative platform shifts
Experience working in Agile or DevSecOps environments
2+ years of experience interfacing with platform engineers and data visibility teams, managing AWS resources, and handling GitLab administration
#LI-hybrid #austintx #ETLengineer #dataengineer #army #aswf #clearancejobs #clearedjobs #secretclearance #ETL
Senior Data Engineer
Remote job
Senior Data Engineer - Integration Engineer (DataStage v11+ is a must)
Fully Remote
32+ Months
Pay Rate: best market rate per hour on W2
At least 10+ years of work experience required
Top 3-5 Must Haves
Experience working with large-scale data pipelines and cloud infrastructure (cloud ETL tools like Glue, and data warehousing solutions like Redshift)
Knowledge of deploying and maintaining cloud-based infrastructure for data workflows (e.g., AWS, GCP, Azure, Redshift)
2 years of technical expertise in cloud applications, data ingestion, and the Databricks Data Lakehouse platform
4 years of extensive hands-on experience building ETL interfaces using DataStage version 11.7 to aggregate, cleanse, and migrate data across enterprise-wide Big Data and Data Warehousing systems using staged data processing techniques, patterns, and best practices
A combined 4 years of experience with advanced SQL and stored procedures (on DB2, SQL Server, and Oracle database platforms), with hands-on experience designing solutions for optimal performance and handling other non-functional aspects of availability, reliability, and security of the DataStage ETL platform
DELIVERABLES OR TASKS:
Recommend good practices for the ETL process to extract data from disparate source transaction systems, transform and enrich the data, and load it into specific data models that are optimized for analysis and reporting.
Facilitate a best-in-class data integration platform.
Recommend and implement DataStage 11.7 features for optimization.
Recommend training and mentoring plans for the transition of knowledge to current employees.
Deliver quality results on tasks and assignments.
TECHNICAL KNOWLEDGE AND SKILLS:
Strong analytical skills with the ability to analyze information, identify problems, and formulate solutions. Provides in-depth analysis with a high-level view of goals and end deliverables.
Over 5 years of proven work experience in DataStage version 11.0 or above, including over two years with DataStage 11.7, is a must.
Over 3 years of proven work experience with scripting languages such as Perl and Shell, and with Linux/Unix servers, file structures, and scheduling.
We are looking for a Data Engineer in Austin, TX (fully remote - MUST work CST hours).
Job Title: Data Engineer
Contract: 12 Months
Hourly Rate: $75- $82 per hour (only on W2)
Additional Notes:
Fully remote - MUST work CST hours
Skills: SQL, Python, dbt
• Utilize geospatial data tools (PostGIS, ArcGIS/ArcPy, QGIS, GeoPandas, etc.) to optimize and normalize spatial data storage, and run spatial queries and processes to power analysis and data products (see the sketch after this list)
• Design, create, refine, and maintain data processes and pipelines used for modeling, analysis, and reporting using SQL (ideally Snowflake and PostgreSQL), Python, and pipeline and transformation tools like Airflow and dbt
• Conduct detailed data research on internal and external geospatial data (POI, geocoding, map layers, geometric shapes), identify changes over time, and maintain geospatial data (shape files, polygons, and metadata)
• Operationalize data products with detailed documentation, automated data quality checks and change alerts
• Support data access through various sharing platforms, including dashboard tools
• Troubleshoot failures in data processes, pipelines, and products
• Communicate and educate consumers on data access and usage, managing transparency in metric and logic definitions
• Collaborate with other data scientists, analysts, and engineers to build full-service data solutions
• Work with cross-functional business partners and vendors to acquire and transform raw data sources
• Provide frequent updates to the team on progress and status of planned work
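To illustrate the geospatial tooling listed above, here is a minimal GeoPandas sketch; the file names and layers are hypothetical, and it assumes GeoPandas 0.10+ for the `predicate` keyword.

```python
# Minimal GeoPandas sketch (hypothetical file paths): load POI points and
# region polygons, align projections, and run a spatial join.
import geopandas as gpd

pois = gpd.read_file("pois.geojson")    # point features
regions = gpd.read_file("regions.shp")  # polygon features

# Normalize both layers to the same CRS before any spatial operation.
pois = pois.to_crs(epsg=4326)
regions = regions.to_crs(epsg=4326)

# Tag each POI with the region polygon that contains it.
joined = gpd.sjoin(pois, regions, how="left", predicate="within")
print(joined.head())
```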
About us:
Harvey Nash is a national, full-service talent management firm specializing in technology positions. Our company was founded with a mission to serve as the talent partner of choice for the information technology industry.
Our company vision has led us to incredible growth and success in a relatively short period of time and continues to guide us today. We are committed to operating with the highest possible standards of honesty, integrity, and a passionate commitment to our clients, consultants, and employees.
We are part of Nash Squared Group, a global professional services organization with over forty offices worldwide.
For more information, please visit us at ******************************
Harvey Nash will provide benefits; please review: 2025 Benefits - Corporate
Regards,
Dinesh Soma
Recruiting Lead
iOS Developer (Hybrid), Only W2
Remote job
Hi,
We are looking for an iOS Developer (Hybrid). Please let me know if you are interested and share your resume.
Job Title: iOS Developer (Hybrid), Only W2
Scope:
• Design, build, and release iOS features for a mobile casting application, primarily in Objective-C.
• Implement Terms of Service support within the iOS platform.
• Lead the development of Wake on LAN capabilities for casting to sleeping devices (see the protocol sketch after this list).
• Work closely with Android counterparts to ensure platform parity and a cohesive user experience.
• Participate in architectural planning, feature design, and iterative development cycles.
• Deliver high-quality code on tight timelines, particularly in preparation for Q3 milestones.
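Wake on LAN itself is a simple, well-documented protocol: a "magic packet" of six 0xFF bytes followed by the target MAC address repeated 16 times, sent over UDP broadcast. The posting's work is in Objective-C, but as a language-neutral illustration of the packet format, here is a short Python sketch (the MAC address is a placeholder).

```python
# Sketch of the Wake on LAN "magic packet": 6 bytes of 0xFF followed by the
# target MAC address repeated 16 times, sent over UDP broadcast.
import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

send_magic_packet("00:11:22:33:44:55")  # placeholder MAC address
```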
Required:
• 7+ years of hands-on iOS development experience with multiple production app releases.
• Deep experience coding in Objective-C within production environments.
• Strong understanding of iOS networking and low-level network behaviors.
• Ability to define and implement features from scratch, including architecture and design.
• Comfortable navigating fast-moving development environments with evolving requirements.
Pluses:
• Experience working on casting technologies or media platforms (e.g., AirPlay, Roku, Fire TV).
• Familiarity with Wake on LAN, socket communication, or custom network protocols.
• Exposure to cross-platform development workflows that support both Android and iOS.
• Experience working with non-Android TV platforms (e.g., LG webOS, Tizen, Linux-based receivers).
• Prior contributions to media playback SDKs, streaming features, or session handoff experiences.
• Familiarity with terms of service, user consent flows, or related UX on mobile platforms.
Regards,
Praveen Vasala
Email ID: **************************
Contact: ***************** Ext: 122
Senior SAP Developer - ETL / REMOTE
Remote job
Robinson Group has been retained to fill a newly created role on a newly created team: a Senior SAP Developer (ETL) - fully REMOTE.
A technically strong team that is using innovative approaches, the latest technology, and strong collaboration.
*This fully remote position will be part of a $17B organization but has the flexibility and mindset of a start-up organization.
*A growing, smart, and fully supported team that will have you leading the integration of SAP data - primarily from SAP ECC and SAP S/4HANA - into a unified, cloud-based Enterprise Data Platform (EDP).
This role needs deep expertise in SAP data structures, combined with strong experience in enterprise ETL development using cloud-native technologies.
As a Senior SAP Developer (ETL), you will play a key role in designing and implementing scalable data pipelines that extract, transform, and harmonize data from SAP systems into canonical models for analytics, reporting, and machine learning use cases.
You will partner closely with data engineers, architects, and SAP subject matter experts to ensure accuracy, performance, and alignment with business requirements.
This role will support a variety of high-impact projects focused on enabling cross-ERP visibility, operational efficiency, and data-driven decision-making across finance, manufacturing, and supply chain functions.
Your contributions will help standardize critical datasets and accelerate the delivery of insights across the organization.
Your skillset:
Strong experience in SAP ECC and SAP HANA
SAP Datasphere (building ETL pipelines)
Architect and implement ETL pipelines to extract data from SAP ECC / HANA / Datasphere
Design and build robust, scalable ETL/ELT pipelines to ingest data into the Microsoft cloud using tools such as Azure Data Factory or Alteryx.
Analyze/interpret SAP's internal data models while also working closely with both SAP functional and technical teams
Lead the end-to-end data integration process for SAP ECC
Leverage knowledge of HANA DW to support reporting and semantic modeling
Strong communication capabilities as they relate to interfacing with supply chain and finance business leaders
Strong cloud knowledge (Azure is preferred; GCP, AWS, Fabric)
Ability to model data / data modeling skills
Exposure to/experience with Python (building data transformations in SQL and Python)
Your background:
Bachelor's degree in Computer Science, Data Science, Information Systems, or a related field.
10 years of IT experience, with 8 years of SAP experience (SAP ECC and SAP S/4HANA).
Hands-on experience with Azure cloud data services including Synapse Analytics, Data Lake Storage, SQL DB.
Experience building cloud-native applications, for example with Microsoft Azure, AWS or GCP
Remote Sr. SQL Database Developer Job :
Remote job
Remote Sr. SQL Database Developer Job in Rochester, NY:
Direct Hire Salary Range: $100,000 - $140,000, based on experience, education, geographic location, and other factors.
Please no 3rd party or C2C candidates
The senior developer will design, develop and test complex data-driven business logic using stored procedures, functions, views and tables. In-depth knowledge of performance tuning, data modeling and database design concepts are key aspects of this position.
Responsibilities of the Remote Sr. SQL Database Developer Job in Rochester, NY:
Design logical and physical data models, preparing and presenting statistical information for both internal and external use
Extensive experience with Microsoft SQL Server
Ensure database optimization, integrity, consistency, security and privacy
Provide support and guidance, collaborating with application developers to implement database design and reviewing developers' work to ensure correct implementation
Create scripts to build new database objects
Develop stored procedures, functions, packages, triggers and views using SQL
Assist with schema design, code review and SQL query tuning
Participate in SQL code reviews, write and deploy SQL patches, and gain a deeper understanding of mirroring and SQL clustering
Qualifications of the Remote Sr. SQL Database Developer Job in Rochester, NY:
5+ years working as a database developer, database engineer or in a related role
5 -7 years of SQL experience
2 or more years of handling a database environment with strong data analysis and analytical skills
SQL server administration experience, including knowing the basics of running Microsoft SQL Server - users, permission, backups, recovery, monitoring, and more
Database tuning experience, database integration design and implementation, and management of database projects
Ability to work with a team in an Agile environment - you can address bugs with QA, plan schemas with engineering, and respond quickly to other business needs
Knowledge and know-how to troubleshoot potential issues, and experience with best practices around database operations
Power BI / data warehouse experience is a plus
Highly organized and self-motivated, with the ability to prioritize projects and meet deadlines
Benefits Offering:
Medical, dental, vision insurance coverage.
Retirement Savings
Paid holidays and generous paid time off.
For more information or to be considered for the Sr. SQL Database Developer Job in Rochester, NY please contact Thomas McCarthy at ***************************
Senior HL7 Interface Developer (Remote)
Remote job
A large health services network is actively seeking a new HL7 Interface Developer to join their staff in a Senior-level Remote opportunity.
About the Opportunity:
Schedule: Monday to Friday
Hours: 8am to 5pm (EST)
Responsibilities:
Design, develop, test, and deploy HL7 interfaces using integration engines (e.g., Cloverleaf, InterSystems Ensemble/IRIS).
Translate functional requirements into technical specifications for interface development.
Build and maintain real-time, batch, and API-based integrations adhering to HL7 standards (v2.x, v3, FHIR).
Develop robust workflows for ADT, ORM, ORU, SIU, MDM, DFT, and other standard HL7 message types (see the parsing sketch after this list).
Perform other duties, as needed
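For readers unfamiliar with HL7 v2.x, messages are pipe-and-caret delimited: segments (MSH, PID, PV1, ...) are separated by carriage returns and fields by pipes. Here is a minimal Python sketch that parses a fabricated ADT^A01 message by hand; production interfaces would use an integration engine or a dedicated HL7 library rather than string splitting.

```python
# Minimal sketch of HL7 v2.x message structure (a fabricated ADT example):
# segments are separated by carriage returns and fields by pipes.
raw = (
    "MSH|^~\\&|SENDAPP|SENDFAC|RECVAPP|RECVFAC|202401011200||ADT^A01|MSG0001|P|2.5\r"
    "PID|1||12345^^^HOSP^MR||DOE^JANE||19800101|F\r"
    "PV1|1|I|ICU^101^A\r"
)

segments = [s.split("|") for s in raw.strip().split("\r")]
by_name = {seg[0]: seg for seg in segments}

print(by_name["MSH"][8])  # message type: ADT^A01 (MSH-9; MSH-1 is the '|' itself)
print(by_name["PID"][5])  # patient name (PID-5): DOE^JANE
```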
Qualifications:
2+ years of experience developing HL7 interfaces within a healthcare environment
Must be Epic Bridges Certified
Experience developing interfaces in the InterSystems IRIS integration engine (formerly known as Ensemble)
Experience in data conversion, converting historical clinical data into Epic
Ability to build interfaces between Epic and third-party applications
Salesforce Developer (St. Pete, FL) #983663
Remote job
W2 ONLY, Client CANNOT do Sponsorship
Salesforce Developer
Duration: Direct Hire
We are seeking an experienced Salesforce Developer/Administrator to support, enhance, and optimize our Salesforce platform. This role will focus on designing and building scalable solutions using Apex, Lightning Components, Visualforce, API integrations, and standard Salesforce configuration. The ideal candidate is a hands-on Salesforce expert who can collaborate with business stakeholders, translate requirements into technical solutions, and ensure the platform operates efficiently and reliably.
This is a remote position.
Key Responsibilities
Salesforce Development
Design, develop, test, and deploy custom solutions using Apex classes, triggers, Lightning Web Components (LWC), Aura components, and Visualforce pages.
Develop and maintain API integrations between Salesforce and external systems (REST/SOAP APIs) - see the sketch after this list.
Build and optimize declarative functionality including flows, validation rules, process automation, and page layouts.
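To illustrate the REST integration work described above, here is a hedged Python sketch querying the Salesforce REST API with SOQL; the instance URL and access token are placeholders, and a real integration would obtain the token through an OAuth flow.

```python
# Hedged sketch: querying Salesforce via its REST SOQL query endpoint.
import requests

INSTANCE_URL = "https://example.my.salesforce.com"  # placeholder
ACCESS_TOKEN = "<oauth-access-token>"               # placeholder

resp = requests.get(
    f"{INSTANCE_URL}/services/data/v59.0/query",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"q": "SELECT Id, Name FROM Account LIMIT 5"},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["Id"], record["Name"])
```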
Salesforce Administration
Manage day-to-day Salesforce operations including user setup, permissions, roles, profiles, and security settings.
Maintain and optimize objects, fields, workflows, approval processes, reports, and dashboards.
Troubleshoot issues, provide user support, and ensure data integrity across the platform.
System Enhancements & Projects
Translate business requirements into Salesforce solutions through configuration or custom development.
Participate in full SDLC processes including requirements gathering, technical design, development, testing, and deployment.
Support Salesforce releases by testing new features, identifying impacts, and implementing updates.
Collaboration & Documentation
Work closely with stakeholders across Sales, Marketing, Customer Support, and IT to enhance Salesforce functionality.
Create and maintain technical documentation, data flow diagrams, and system configuration records.
Provide training and guidance on Salesforce best practices and new features.
Qualifications
3+ years of hands-on Salesforce development and administration experience.
Strong experience with:
Apex (classes, triggers, batch jobs, schedulers)
Lightning Web Components (LWC) and/or Aura Components
Visualforce
REST & SOAP APIs / integration patterns
Proficiency in Object-Oriented Programming (OOP) concepts.
Strong understanding of Salesforce data structures, security model, and declarative capabilities.
Experience working in an Agile or iterative development environment.
Salesforce certifications such as Platform Developer I/II, Admin, or App Builder are a plus.
Key Competencies
Strong problem-solving and troubleshooting skills
Ability to communicate clearly to both technical and non-technical audiences
Highly organized and able to manage multiple projects
Self-driven and comfortable working remotely
Database developer Remote
Remote job
The database developer supports front-end systems (as needed by developers across the organization, in support of web services, third-party, or internal development needs), to the exclusion of reporting needs from other departments. Developed code includes, but is not limited to, PL/SQL in the form of triggers, procedures, functions, and materialized views. Generates custom-driven applications for intra-department use by business users in a rapid application development platform (primarily APEX). Responsible for functional testing and deployment of code through the development life cycle. Works with end-users to obtain business requirements. Responsible for developing, testing, improving, and maintaining new and existing processes to help users retrieve data effectively. Collaborates with administrators and business users to provide technical support and identify new requirements.
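As a small illustration of the PL/SQL-centric work described above, here is a hedged Python sketch that invokes a stored procedure with the python-oracledb driver; the connection details and procedure name are hypothetical.

```python
# Hedged sketch: calling a hypothetical PL/SQL stored procedure from Python
# using the python-oracledb driver.
import oracledb

conn = oracledb.connect(user="app", password="***", dsn="dbhost/orclpdb1")
with conn.cursor() as cur:
    # Hypothetical procedure that refreshes a materialized view.
    cur.callproc("refresh_sales_mv", ["SALES_SUMMARY_MV"])
conn.commit()
conn.close()
```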
Responsibilities:
Design stable, reliable and effective database processes.
Solve database usage issues and malfunctions.
Gather user requirements and identify new features.
Provide data management support to users.
Ensure all database programs meet company and performance requirements.
Research and suggest new database products, services, and protocols.
Requirements and skills
In-depth understanding of data management (e.g. permissions, security, and monitoring)
Excellent analytical and organization skills
An ability to understand front-end user requirements and a problem-solving attitude
Excellent verbal and written communication skills
Assumes responsibility for related duties as required or assigned.
Stays informed regarding current computer technologies and relational database management systems with related business trends and developments.
Consults with respective IT management in analyzing business functions and management needs and seeks new and more effective solutions. Seeks out new systems and software that reduces processing time and/or provides better information availability and decision-making capability.
Job Type: Full-time
Pay: $115,000 - $128,000 yearly
Expected hours: 40 per week
Benefits:
Dental insurance
Vision insurance
Various health insurance options & wellness plans
Paid time off (PTO)
Required Knowledge
Considerable knowledge of online computer application design.
Required Experience
One to three years of database development/administration experience.
Skills/Abilities
Strong creative and analytical thinking skills.
Well organized with strong project management skills.
Good interpersonal and supervisory abilities.
Ability to train and provide aid to others.
Backend Developer - Database - USA (Remote)
Remote job
Greetings Everyone
Who are we?
For the past 20 years, we have powered many digital experiences for the Fortune 500. Since 1999, we have grown from a few people to more than 4,000 team members across the globe who are engaged in various digital modernization efforts. For a brief one-minute video about us, you can check *****************************
What will you do? What are we looking for?
The requirement is for a DB/BE candidate with strong SQL and PL/SQL skills.
Position Summary
We are seeking a highly skilled backend-focused Staff Software Engineer to join our team. The ideal candidate will have extensive experience in backend development, system design, and a strong understanding of cloud-native software engineering principles.
Responsibilities
Develop backend services using Java and Spring Boot
Design and implement solutions deployed on Google Cloud Platform (GKE)
Work with distributed systems, including Google Cloud Spanner (Postgres dialect) and Confluent Kafka (or similar pub/sub tools) - see the sketch after this list
Design, optimize, and troubleshoot complex SQL queries and stored procedures (e.g., PL/SQL) to support high-performance data operations and ensure data integrity across applications.
Collaborate with teams to implement CI/CD pipelines using GitHub Actions and Argo CD
Ensure high performance and reliability through sound software engineering practices
Mentor and provide technical leadership to the frontend engineering team
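As a flavor of the pub/sub work listed above (the role itself is Java/Spring Boot), here is a minimal producer sketch using the Python confluent-kafka client; the broker address, topic, and event shape are placeholders.

```python
# Hedged sketch of a minimal Kafka producer using the confluent-kafka client.
import json

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder broker

def on_delivery(err, msg):
    # Report per-message delivery success or failure.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}]")

event = {"order_id": 123, "status": "created"}  # placeholder event
producer.produce("orders", value=json.dumps(event).encode(), callback=on_delivery)
producer.flush()  # block until outstanding messages are delivered
```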
Required Qualifications
7+ years' experience in software engineering from ideation to production deployment of IT solutions
5+ years' experience in full software development life cycle including ideation, coding, coding standards, testing, code reviews and production deployments
5+ years of experience with backend Java, Spring Boot, and Microservices
3+ years of hands-on experience with a public cloud provider
3+ years working with pub/sub tools like Kafka or similar
3+ years of experience with database design/development (Postgres or similar)
2+ years of experience with CI/CD tools (GitHub Actions, Jenkins, Argo CD, or similar)
Preferred Qualifications
Demonstrated experience with development and deployment of Minimum Viable Products (MVPs)
Must demonstrate an innovative mindset, divergent thinking, and convergent actions.
Familiarity with Kubernetes concepts; experience deploying services on GKE is a plus
Compensation, Benefits and Duration
Minimum Compensation: USD 44,000
Maximum Compensation: USD 154,000
Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable, good-faith estimate for the role.
Medical, vision, and dental benefits, 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full time employees.
This position is available for independent contractors
No applications will be considered if received more than 120 days after the date of this post
Database Developer 1 (Remote)
Remote job
Prepares, defines, structures, develops, implements, and maintains database objects. Analyzes query performance, identifies bottlenecks, and implements optimization techniques. Defines and implements interfaces to ensure that various applications and user-installed or vendor-developed systems interact with the required database systems.
Creates database structures, writes and tests SQL queries, and optimizes database performance.
Plans and develops test data to validate new or modified database applications.
Works with business analysts and other stakeholders to understand requirements and integrate database solutions.
Builds and implements database systems that meet specific business requirements, ensuring data integrity and security, as well as troubleshooting and resolving database issues.
Designs and implements ETL pipelines to integrate data from various sources using SSIS.
Responsible for various SQL jobs.
Skills Required
Strong understanding of SQL and DBMS like MySQL, PostgreSQL, or Oracle.
Ability to design and model relational databases effectively.
Skills in writing and optimizing SQL queries for performance.
Ability to troubleshoot and resolve database-related issues.
Ability to communicate technical information clearly and concisely to both technical and non-technical audiences.
Ability to collaborate effectively with other developers and stakeholders.
Strong ETL experience specifically with SSIS.
Skills Preferred
Azure experience is a plus
.Net experience is a plus
GITHub experience is a plus
Experience Required
2 years of progressively responsible programming experience or an equivalent combination of training and experience.
Education Required
Bachelor's degree in Information Technology or Computer Science, or equivalent experience
PostgreSQL Database Developer
Remote job
Employment Type: Full Time, Experienced Level
Department: Information Technology
CGS is seeking a PostgreSQL Database Developer to join our team supporting a rapidly growing Data Analytics and Business Intelligence platform focused on providing data solutions that empower our federal customers. You will support a migration from the current Oracle database to a Postgres database and manage the database environments proactively. As we continue our growth, you will play a key role in ensuring scalability of our data systems.
CGS brings motivated, highly skilled, and creative people together to solve the government's most dynamic problems with cutting-edge technology. To carry out our mission, we are seeking candidates who are excited to contribute to government innovation, appreciate collaboration, and can anticipate the needs of others. Here at CGS, we offer an environment in which our employees feel supported, and we encourage professional growth through various learning opportunities.
Skills and attributes for success:
- Drive efforts to migrate from the current Oracle database to the new Microsoft Azure Postgres database
- Create and maintain technical documentation, using defined technical documentation templates, and gain an in-depth knowledge of the business data to propose and implement effective solutions
- Collaborate with internal and external parties to transform high-level technical objectives into comprehensive technical requirements
- Ensure the availability and performance of the databases that support our systems, ensuring that they have sufficient resources allocated to support high resilience and speed
- Perform and assist developers in performance tuning
- Proactively monitor the database systems to ensure secure services with minimum downtime, and improve maintenance of the databases to include rollouts, patching, and upgrades (see the monitoring sketch after this list)
- Work within a structured and Agile development approach
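As an illustration of the proactive monitoring mentioned above, here is a hedged Python sketch using psycopg2 to flag long-running statements in pg_stat_activity; the connection string and the five-minute threshold are assumptions.

```python
# Hedged sketch: flag statements running longer than five minutes by querying
# pg_stat_activity. Connection details are placeholders.
import psycopg2

conn = psycopg2.connect("host=dbhost dbname=app user=monitor password=***")
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT pid, state, now() - query_start AS runtime, query
        FROM pg_stat_activity
        WHERE state <> 'idle'
          AND now() - query_start > interval '5 minutes'
        ORDER BY runtime DESC;
        """
    )
    for pid, state, runtime, query in cur.fetchall():
        print(pid, state, runtime, query[:80])
conn.close()
```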
Qualifications:
- Bachelor's degree
- Must be a US citizen
- 7 years of experience administering PostgreSQL databases in Linux environments
- Experience setting up, monitoring, and maintaining PostgreSQL instances
- Experience implementing and maintaining PostgreSQL backup and disaster recovery processes
- Experience migrating Oracle schemas, packages, views, and triggers to Postgres using the Ora2Pg tool
Ideally, you will also have:
- Experience implementing and maintaining data warehouses
- Experience with AWS RDS for PostgreSQL
- Experience with Oracle databases
- Experience leveraging the Ora2Pg tool
- Experience working in cloud environments such as Azure and/or AWS
- Prior federal consulting experience
Our Commitment: Contact Government Services (CGS) strives to simplify and enhance government bureaucracy through the optimization of human, technical, and financial resources. We combine cutting-edge technology with world-class personnel to deliver customized solutions that fit our clients' specific needs. We are committed to solving the most challenging and dynamic problems.
For the past seven years, we've been growing our government-contracting portfolio, and along the way, we've created valuable partnerships by demonstrating a commitment to honesty, professionalism, and quality work.
Here at CGS we value honesty through hard work and self-awareness, professionalism in all we do, and delivering the best quality to our consumers, mending those relationships for years to come.
We care about our employees. Therefore, we offer a comprehensive benefits package:
- Health, Dental, and Vision
- Life Insurance
- 401k
- Flexible Spending Account (Health, Dependent Care, and Commuter)
- Paid Time Off and Observance of State/Federal Holidays
Contact Government Services, LLC is an Equal Opportunity Employer. Applicants will be considered without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Join our team and become part of government innovation!
Explore additional job opportunities with CGS on our Job Board: *************************************
For more information about CGS please visit: ************************** or contact: Email: *******************
#CJ
Cloud Database Developer (AWS)
Remote job
Capstone Integrated Solutions is a comprehensive services provider. Our team consists of outstanding professionals, highly experienced in designing, building, and supporting retail software. We see ourselves as a build-as-a-service provider who follows a repeatable business pattern that can be applied to a variety of platforms and verticals. Having a culture built on outcomes and delivery at the core of the business, Capstone is providing its customers with a complete suite of services for software development, system analysis, integration, implementation, and support, as well as the option to engage a single team to perform all the services they require.
Who You Are and What You'll Do:
Capstone Integrated Solutions is looking for a highly motivated and talented Cloud Database Developer specializing in AWS database development to join our growing team. This is an exciting opportunity for a driven professional passionate about creating cutting-edge solutions for the energy industry.
Responsibilities:
Collaborate with cross-functional teams to design, develop, and maintain software solutions.
Work on AWS database and API development tasks.
Assist in the integration of the developed solutions with existing systems and databases.
Participate in the entire software development lifecycle, from planning and design to deployment and maintenance.
Qualifications:
Bachelor's degree in computer science or a related field.
Solid understanding of AWS cloud database development fundamentals, SQL, GraphQL, API Gateway, RDS, Oracle, PostgreSQL, DynamoDB, and other AWS services as required (see the sketch after this list).
Eagerness to learn and adapt to new technologies and frameworks.
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
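To illustrate the AWS database services listed above, here is a hedged boto3 sketch of basic DynamoDB reads and writes; the table name and key schema are hypothetical, and AWS credentials are assumed to be configured in the environment.

```python
# Hedged sketch of basic DynamoDB access with boto3 (hypothetical table/keys).
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Customers")  # hypothetical table

# Write an item, then read it back by its partition key.
table.put_item(Item={"customer_id": "c-100", "name": "Acme Energy", "tier": "gold"})
resp = table.get_item(Key={"customer_id": "c-100"})
print(resp.get("Item"))
```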
Nice to Have:
Basic understanding of programming technologies such as TypeScript and Python.
Experience with version control systems (e.g., Git).
“Our Culture”:
At Capstone, the central principles that we all adhere to, and the glue that holds us together, are our keystones. Our four keystones are:
"A Customer Obsessed, Delivery Focused, Culture"
We're driven to exceed our customers' expectations by listening, leading, solving problems, and delivering what we promise
We aim to be the most dependable and trusted partner serving our customers. TRUST = CONSISTENCY x TIME
"A Culture of Learning and Sharing"
We value “Lifetime Learners”; those who are hungry, competitive, curious, and self-motivated in their pursuit of knowledge.
Personal and professional growth depends on teamwork and continuous learning. By sharing knowledge, skills, ideas, and effort, we benefit our customers, ourselves, and our communities.
We recognize that the thoughts, feelings, and backgrounds of others are as important as our own. Everyone has something to learn and everyone has something they can teach.
Knowledge and ability are valued. Sharing knowledge and helping others learn new capabilities is valued exponentially.
"A Culture of Growth and Scalability"
Growth comes from not establishing barriers in your role. Cross-functional skill sets are valued and help us deliver to our customers in a truly agile fashion. It comes with understanding that when asked to do something new, you will need support, have questions, and make some mistakes along the way.
The most elegant solution is a simple solution. Simple doesn't mean easy. It's often more difficult to break a complex problem down into simple, scalable terms. We don't appreciate, or value, over architected solutions or superfluous coding.
Time is one of our most precious commodities. Scalability implies being respectful of this and passionate about making the most efficient use of each and every one of our team members' time.
"All Work is Strategic"
No matter how small a project or assignment appears, every single engagement is an opportunity for us to prove ourselves, build trust, and develop relationships that last and grow
Every task, interaction, and commitment matters
Big or small, we execute our plans and strategies with focus, commitment, and passion
We offer:
Job Type: Full-time
Short-term contract
Benefits:
Remote work
Capstone Integrated Solutions
is an equal opportunity employer. We embrace and celebrate diversity and are committed to creating an inclusive and safe environment for all employees. Experience comes in many forms, and we're dedicated to adding new perspectives to the team. We encourage you to apply even if your experience doesn't perfectly align with what we have listed. We look forward to hearing from you.
No Agencies Please!
Data Engineer (Remote USA) G10
Remote job
The application window is expected to close on 12/25/25. The job posting may be removed earlier if the position is filled or if a sufficient number of applications are received.
Meet the Team
Join the Cisco IT Data team, where innovation, automation, and reliability drive world-class business outcomes. Our team delivers scalable, secure, and high-performance platforms supporting Cisco's global data operations. We value a culture of continuous improvement, collaboration, and technical excellence, empowering team members to experiment and drive operational transformation.
Your Impact
As a Data Operations (DevOps) Engineer, you will play a meaningful role in building, automating, and optimizing the infrastructure and processes that support the Corporate Functions - Enterprise Data Warehouse. Your expertise will ensure the reliability, scalability, and security of data platforms and pipelines across cloud and on-premise environments. You'll collaborate closely with data engineers, software engineers, architects, and business partners to create robust solutions that accelerate data-driven decision-making at Cisco.
Key Responsibilities
* Automate deployment, monitoring, and management of data platforms and pipelines using industry-standard DevOps tools and standard processes.
* Design, implement, and maintain CI/CD pipelines for ETL, analytics, and data applications (e.g., Informatica, DBT, Airflow, Python, Java).
* Ensure high availability, performance, and security of data systems in cloud (Snowflake, Google BigQuery, AWS/GCP/Azure) and hybrid environments.
* Lead infrastructure as code (Terraform, CloudFormation, or similar) to provision and scale resources efficiently.
* Implement observability and data quality monitoring using modern tools (e.g., Monte Carlo, Prometheus, Grafana, ELK).
* Troubleshoot and resolve issues in production data and workflows, collaborating with engineering and analytics teams for root cause analysis and solution delivery.
* Drive automation and process improvement for data operations, system upgrades, patching, and access management.
* Contribute to security and compliance initiatives related to data governance, access controls, and audit readiness.
* Mentor and support junior engineers, encouraging a culture of knowledge sharing and operational excellence.
Minimum Qualifications
* Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
* 5-8 years of experience in DevOps, Data Operations, or related IT engineering roles.
* 5-8 years of proficiency with cloud platforms (Snowflake, AWS).
* 5-8 years of hands-on experience with CI/CD tools (Jenkins, GitLab CI, etc.), scripting (Python, Shell), and configuration management.
* Working knowledge of ETL and workflow orchestration tools (Informatica, DBT, Airflow).
* Familiarity with infrastructure as code (Terraform, CloudFormation, etc.).
* 5+ years of experience with monitoring, logging, and alerting solutions (Prometheus, Grafana, ELK, Monte Carlo, etc.).
* 5-8 years of experience with containerization and orchestration (Docker, Kubernetes).
* 5+ years of strong troubleshooting, incident management, and problem-solving experience.
* Experience working in Agile/Scrum teams and delivering in fast-paced environments.
Preferred Qualifications
* Experience supporting data warehouse or analytics platforms in enterprise settings.
* Knowledge of data quality, security, and governance frameworks.
* Familiarity with automation tools and standard methodologies for operational efficiency.
* Understanding of data pipelines, modeling, and analytics.
* Excellent communication, collaboration, and documentation skills.
Why Cisco?
At Cisco, we're revolutionizing how data and infrastructure connect and protect organizations in the AI era - and beyond. We've been innovating fearlessly for 40 years to create solutions that power how humans and technology work together across the physical and digital worlds. These solutions provide customers with unparalleled security, visibility, and insights across the entire digital footprint.
Fueled by the depth and breadth of our technology, we experiment and create meaningful solutions. Add to that our worldwide network of doers and experts, and you'll see that the opportunities to grow and build are limitless. We work as a team, collaborating with empathy to make really big things happen on a global scale. Because our solutions are everywhere, our impact is everywhere.
We are Cisco, and our power starts with you.
Message to applicants applying to work in the U.S. and/or Canada:
The starting salary range posted for this position is $165,000.00 to $241,400.00 and reflects the projected salary range for new hires in this position in U.S. and/or Canada locations, not including incentive compensation*, equity, or benefits.
Individual pay is determined by the candidate's hiring location, market conditions, job-related skillset, experience, qualifications, education, certifications, and/or training. The full salary range for certain locations is listed below. For locations not listed below, the recruiter can share more details about compensation for the role in your location during the hiring process.
U.S. employees are offered benefits, subject to Cisco's plan eligibility rules, which include medical, dental and vision insurance, a 401(k) plan with a Cisco matching contribution, paid parental leave, short and long-term disability coverage, and basic life insurance. Please see the Cisco careers site to discover more benefits and perks. Employees may be eligible to receive grants of Cisco restricted stock units, which vest following continued employment with Cisco for defined periods of time.
U.S. employees are eligible for paid time away as described below, subject to Cisco's policies:
+ 10 paid holidays per full calendar year, plus 1 floating holiday for non-exempt employees
+ 1 paid day off for employee's birthday, paid year-end holiday shutdown, and 4 paid days off for personal wellness determined by Cisco
+ Non-exempt employees** receive 16 days of paid vacation time per full calendar year, accrued at rate of 4.92 hours per pay period for full-time employees
+ Exempt employees participate in Cisco's flexible vacation time off program, which has no defined limit on how much vacation time eligible employees may use (subject to availability and some business limitations)
+ 80 hours of sick time off provided on hire date and each January 1st thereafter, and up to 80 hours of unused sick time carried forward from one calendar year to the next
+ Additional paid time away may be requested to deal with critical or emergency issues for family members
+ Optional 10 paid days per full calendar year to volunteer
For non-sales roles, employees are also eligible to earn annual bonuses subject to Cisco's policies.
Employees on sales plans earn performance-based incentive pay on top of their base salary, which is split between quota and non-quota components, subject to the applicable Cisco plan. For quota-based incentive pay, Cisco typically pays as follows (a worked example follows this list):
+ .75% of incentive target for each 1% of revenue attainment up to 50% of quota;
+ 1.5% of incentive target for each 1% of attainment between 50% and 75%;
+ 1% of incentive target for each 1% of attainment between 75% and 100%; and
+ Once performance exceeds 100% attainment, incentive rates are at or above 1% for each 1% of attainment with no cap on incentive compensation.
For non-quota-based sales performance elements such as strategic sales objectives, Cisco may pay 0% up to 125% of target. Cisco sales plans do not have a minimum threshold of performance for sales incentive compensation to be paid.
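Read literally, those tiers imply the arithmetic below. Note the over-100% rate is an assumption: the text only guarantees it is "at or above 1%", so this sketch uses the 1% floor.

```python
# Worked example of the quota-based incentive tiers described above.
# Assumption: attainment beyond 100% is paid at the 1% floor the text names.
def incentive_pct(attainment: float) -> float:
    """Percent of incentive target earned at a given % of revenue attainment."""
    pct = 0.75 * min(attainment, 50)                  # 0-50% of quota
    pct += 1.5 * max(0.0, min(attainment, 75) - 50)   # 50-75%
    pct += 1.0 * max(0.0, min(attainment, 100) - 75)  # 75-100%
    pct += 1.0 * max(0.0, attainment - 100)           # >100% (assumed floor rate)
    return pct

for a in (50, 75, 100, 120):
    print(a, incentive_pct(a))
# 50 -> 37.5, 75 -> 75.0, 100 -> 100.0, 120 -> 120.0
```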
The applicable full salary ranges for this position, by specific state, are listed below:
New York City Metro Area:
$165,000.00 - $277,600.00
Non-Metro New York state & Washington state:
$146,700.00 - $247,000.00
* For quota-based sales roles on Cisco's sales plan, the ranges provided in this posting include base pay and sales target incentive compensation combined.
** Employees in Illinois, whether exempt or non-exempt, will participate in a unique time off program to meet local requirements.
Cisco is an Affirmative Action and Equal Opportunity Employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation, national origin, genetic information, age, disability, veteran status, or any other legally protected basis.
Cisco will consider for employment, on a case by case basis, qualified applicants with arrest and conviction records.
Lead Data Engineer & Modeler, AI - Hybrid
Remote job
Welcome to the Agentic Commerce Era
At Commerce, our mission is to empower businesses to innovate, grow, and thrive with our open, AI-driven commerce ecosystem. As the parent company of BigCommerce, Feedonomics, and Makeswift, we connect the tools and systems that power growth, enabling businesses to unlock the full potential of their data, deliver seamless and personalized experiences across every channel, and adapt swiftly to an ever-changing market. Simply said, we help businesses confidently solve complex commerce challenges so they can build smarter, adapt faster, and grow on their own terms. If you want to be part of a team of bold builders, sharp thinkers, and technical trailblazers, working together to shape the future of commerce, this is the place for you.
BigCommerce is building the foundation for the next generation of AI-driven commerce. As a Lead AI Engineer, Platform & Infrastructure, you'll define and scale the systems that make this transformation possible. This role sits at the intersection of data engineering, MLOps, and applied AI enablement, responsible for building the secure, scalable, and high-performance infrastructure that supports AI/ML use cases across the company.
You'll collaborate across product, engineering, and data teams to design the unified AI platform layer - powering internal intelligence, customer-facing AI features, and advanced analytics. From model lifecycle management to data pipelines and inference infrastructure, you'll drive the architecture and operational excellence that allows BigCommerce to experiment, deploy, and iterate AI at scale.
If you're passionate about enabling intelligence through infrastructure, designing modern ML ecosystems, and operationalizing AI across a fast-scaling SaaS platform, this role will put you at the center of BigCommerce's AI evolution.
What You'll Do
AI Platform Architecture
Partner with the Enterprise Architect and Principal Data Architect to design the company-wide AI/ML platform strategy across GCP and AWS.
Build scalable systems for model training, evaluation, deployment, and monitoring.
Define best practices for data ingestion, feature stores, vector databases, and model registries.
Integrate AI workflows into existing analytics and product pipelines.
Infrastructure & Reliability
Implement CI/CD for ML pipelines (MLOps) including model versioning, validation, and automated deployment.
Ensure platform reliability, observability, and performance at enterprise scale.
Manage GPU/TPU resources and optimize compute efficiency for training and inference workloads.
Contribute to cost-optimization and security best practices across the AI infrastructure.
Cross-Functional Collaboration
Partner with data scientists, applied ML engineers, and product teams to translate model requirements into scalable architecture.
Work closely with the data engineering team to ensure AI pipelines align with governance and data quality standards.
Collaborate with software engineers to integrate AI services and APIs into production systems.
Governance & Responsible AI
Champion data and model governance, including lineage, reproducibility, and compliance (GDPR, SOC, ISO).
Establish monitoring frameworks for model drift, bias detection, and ethical AI use.
Build secure and transparent systems that support trust in AI-driven decisions.
What You'll Bring
7+ years in data or ML engineering, with experience designing production-grade AI infrastructure.
Strong technical foundation in MLOps, data pipelines, and distributed systems.
Hands-on experience with:
Cloud AI platforms (Vertex AI, SageMaker, Bedrock, or equivalent)
Orchestration frameworks (Airflow, Kubeflow, MLflow, or Metaflow)
Cloud data stacks (BigQuery, Snowflake, GCS/S3, Terraform)
Model serving tools (FastAPI, BentoML, Ray Serve, or Triton Inference Server) - see the sketch after this list
Proficient in: Python, SQL, and Git-based CI/CD.
Experience integrating LLMs and vector databases (e.g., Pinecone, FAISS, Weaviate, Vertex Matching Engine).
Familiarity with Kubernetes, Docker, and Terraform for scalable deployment.
Strong communication skills, able to partner across disciplines and simplify complex technical systems.
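To illustrate the model-serving layer named above, here is a minimal FastAPI sketch; the model itself is stubbed out, and the request schema is hypothetical.

```python
# Hedged sketch of a minimal model-serving endpoint with FastAPI; the model
# call is a stand-in (e.g., for one loaded from a model registry).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    features: list[float]

class PredictResponse(BaseModel):
    score: float

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # Stand-in for a real model inference call.
    score = sum(req.features) / max(len(req.features), 1)
    return PredictResponse(score=score)

# Run with: uvicorn app:app --reload
```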
What You'll Impact
The AI foundation powering every intelligent capability within BigCommerce - from predictive analytics to generative assistants.
The tools and frameworks that enable product and engineering teams to build, test, and ship AI faster.
The reliability, governance, and scalability of BigCommerce's enterprise-wide AI ecosystem.
Why Join Us
You'll play a critical role in shaping how BigCommerce operationalizes AI - not just as a feature, but as a platform capability. You'll join a collaborative, ambitious, and fast-evolving data organization dedicated to creating systems that enable intelligence at scale.
#LI-GL1
#LI-HYBRID
(Pay Transparency Range: $116,000-$174,000)
The exact salary will be dependent on the successful candidate's location, relevant knowledge, skills, and qualifications.
Inclusion and Belonging
At Commerce, we believe that celebrating the unique histories, perspectives and abilities of every employee makes a difference for our company, our customers and our community. We are an equal opportunity employer and the inclusive atmosphere we build together will make room for every person to contribute, grow and thrive.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the interview process, to perform essential job functions and to receive other benefits and privileges of employment. If you need an accommodation in order to interview at Commerce, please let us know during any of your interactions with our recruiting team.
Learn more about the Commerce team, culture and benefits at *********************************
Protect Yourself Against Hiring Scams: Our Corporate Disclaimer
Commerce, along with many other employers, has become the subject of fraudulent job offers to hopeful prospective job seekers.
Be advised:
Commerce does not offer jobs to individuals who do not go through our formal hiring process.
Commerce will never:
require payment of recruitment fees from candidates;
request personally identifiable information through unsanctioned websites or applications;
attempt to solicit money from you as part of the hiring process or as part of an employment offer;
solicit money to complete visa requirements as part of a job offer.
If you receive unsolicited offers of employment from Commerce, we urge you to be extremely cautious and avoid engaging or responding.
Data Engineer (Mid-level)
Remote job
We're looking for a mid-level Data Engineer, in Eastern timezone (United States) with experience in financial services, crypto, or blockchain data to join our engineering team. You'll help expand our in-house data capabilities and design pipelines that can handle the unique challenges of high-volume, high-integrity financial data.
About Thesis
Thesis* is a pioneering venture studio dedicated to building on Bitcoin since 2014. We seek, fund, and build products and protocols in cryptocurrency and decentralized businesses that enable personal empowerment.
Our projects include Mezo, a Bitcoin finance app; Keep Network (now Threshold Network), a privacy protocol for public blockchains; Fold (NASDAQ:FLD), for earning Bitcoin on your purchases; Taho, a community-owned and operated cryptocurrency wallet; Lolli, an app providing Bitcoin rewards for purchases, gaming, and other online commerce; and Embody, a fully encrypted period tracking app.
Thesis* continues to challenge traditional systems, driven by innovation and a belief in a sovereign digital future shaping the decentralized landscape one project at a time. To learn more, please visit: ******************
Investors in the company and our projects include Andreessen Horowitz, Pantera, Multicoin, Polychain Capital, and Draper Associates, among others. We are a remote-first company, led by founders who have been operating in the cryptocurrency and web3 space since 2014.
About Mezo
Mezo is Bitcoin's Economic Layer: a new home for Bitcoin holders to cultivate Bitcoin and grow wealth together. It is a Bitcoin-first chain designed for user ownership of assets, reliable bridging with tBTC, a dual staking model for rewards and validation, and much more.
Mezo is proudly brought to you by Thesis, the same team behind tBTC, Fold, Acre, Etcher, Taho, Embody, and Defense. Thesis is a cryptocurrency venture studio whose mission is to empower the individual. We seek, fund, and build brands in cryptocurrency and decentralized businesses that enable personal empowerment. We're a fun, down-to-earth, fast-paced, highly collaborative, and fully remote team!
Investors in Thesis and our projects include Andreessen Horowitz, Polychain Capital, Pantera Capital, and Draper Associates, among others. We are a remote-first company, led by founders who have been operating in the cryptocurrency and web3 space since 2014.
About the Data Engineer
At Mezo, we're building the Bitcoin bank - a financial system where people can bank on themselves. To get there, we need world-class data infrastructure powering everything from on-chain analytics and user insights to credit risk modeling and stablecoin liquidity.
We're looking for a mid-level Data Engineer with experience in financial services, crypto, or blockchain data to join our engineering team. You'll be based in the United States (NYC) and you'll help expand our in-house data capabilities and design pipelines that can handle the unique challenges of high-volume, high-integrity financial data.
What You'll Do
Architect complex, real-time data pipelines:
Design, develop, and optimize ETL pipelines that integrate large data sets from both off-chain and on-chain sources. Ensure low-latency ingestion and processing of time-sensitive data.
Proactively optimize and constantly maintain processes
Act as a key contributor to developing and supporting complex data architectures
Continually troubleshoot and optimize data systems, identifying issues and resolving them
Proactively improve processes and technologies for more efficient data processing and delivery
Ensure data availability, reliability, and performance
Ensure data integrity, consistency, and security across systems
Collaborate with Data Science:
Work with the Data Scientist to write and code-review Python scripts for data ingestion, transformation, and automation
Implement and manage data workflows using Cloud Composer and GitHub Actions for scheduling and orchestration, based on Data Science specifications (a minimal DAG sketch follows this list)
Build and maintain high-performance data warehouse schemas with Google BigQuery and dbt for data transformation, mapped to the needs of the Data Scientist
Work closely with on-chain data:
Build data validation, reconciliation, and monitoring systems that meet the standards of both financial services and crypto-native ecosystems.
Explore new approaches to indexing and querying Bitcoin and Ethereum data, as well as emerging L2s and DeFi protocols.
Collaborate with cross-functional teams:
Partner with product, engineering, and data science to deliver the datasets that drive lending models, stablecoin flows, and new product launches.
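To give a flavor of the orchestration work described above, here is a minimal sketch of a Cloud Composer (Airflow) DAG that chains an ingestion step into a dbt run and a dbt test gate against the warehouse. The DAG id, file paths, and ingestion script are hypothetical stand-ins, not Mezo's actual pipeline.

```python
# Minimal Airflow DAG sketch: all names and paths are illustrative assumptions.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-eng",
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="onchain_dbt_daily",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    default_args=default_args,
    catchup=False,
) as dag:
    # Land raw on-chain and off-chain extracts into staging tables.
    # The actual ingestion job is pipeline-specific; a script call stands in here.
    ingest = BashOperator(
        task_id="ingest_raw_sources",
        bash_command="python /home/airflow/gcs/dags/jobs/ingest.py",  # illustrative path
    )

    # Run dbt transformations, then dbt tests as a data-quality gate.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /home/airflow/gcs/dags/dbt",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /home/airflow/gcs/dags/dbt",
    )

    ingest >> dbt_run >> dbt_test
```

Gating the run on `dbt test` is one common way to satisfy the validation and monitoring responsibilities listed above: a failed test halts downstream consumers rather than publishing bad data.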
Requirements
3-6 years in a data engineering role, with at least some experience in DeFi, fintech, or a related field
Extensive experience with Python and SQL
Experience with data warehousing solutions (Snowflake, BigQuery, Redshift)
Strong understanding of Google Tag Manager (familiarity with Data Layer a plus)
Expertise with data-movement and orchestration tooling such as Fivetran and Airflow, data transformation tools like dbt, and git/GitHub
Comprehensive understanding of the Google Cloud Platform, including Cloud SQL, Cloud Functions, and BigQuery
Familiarity with data governance and compliance standards
Hands-on experience with blockchain or crypto data, including core tools like Dune or Goldsky
Knowledge of standard ETL patterns, modern data warehousing ideas such as data mesh or data vaulting, and data quality practices
Preferred Qualifications
Knowledge of real-time data processing and event-driven tracking with analytics.js and/or Segment
Familiarity with data observability tools and anomaly detection for production systems
Understanding of financial data governance, reconciliation, and compliance needs
Experience with on-chain indexing, blockchain ETL, or real-time risk/credit models
Exposure to data visualization platforms such as Looker Studio, Hex, or Mixpanel
Prior experience as a data analyst or scientist
Location
Remote in the U.S. - Eastern timezone
Salary
We offer competitive salaries that vary with experience and a number of other factors.
Benefits
At Thesis, we work in a fun, fast-paced environment that operates by collaborating both remotely and in person when we can. We offer a competitive salary, full health benefits, opportunity for equity and a number of other perks.
Our Cultural Tenets
We Believe in Freedom and Autonomy
We Have Inquisitive Minds
We Are Obsessed with Communication
We Are Proudly Offbeat
We Care About Each Other
We Are Driven
Equal Opportunity Statement
Thesis is committed to building a diverse and inclusive team. We welcome applications from candidates of all backgrounds and do not discriminate based on race, religion, national origin, gender, sexual orientation, age, veteran status, or disability status.
Principal Data Engineer - ML Platforms
Remote job
Altarum | Data & AI Center of Excellence (CoE)
Altarum is building the future of data and AI infrastructure for public health - and we're looking for a Principal Data Engineer - ML Platforms to help lead the way. In this cornerstone role, you will design, build, and operationalize the modern data and ML platform capabilities that power analytics, evaluation, AI modeling, and interoperability across all Altarum divisions.
If you want to architect impactful systems, enable data science at scale, and help ensure public health and Medicaid programs operate with secure, explainable, and trustworthy AI - this role is for you.
What You'll Work On
This role blends deep engineering with applied ML enablement:
ML Platform Engineering: modern lakehouse architecture, pipelines, MLOps lifecycle
Applied ML enablement: risk scoring, forecasting, Medicaid analytics
NLP/Generative AI support: RAG, vectorization, health communications
Causal ML operationalization: evaluation modeling workflows
Responsible/Trusted AI engineering: model cards, fairness, compliance
Your work ensures that Altarum's public health and Medicaid programs run on secure, scalable, reusable, and explainable data and AI infrastructure.
What You'll Do
Platform Architecture & Delivery
Design and operate modern, cloud-agnostic lakehouse architecture using object storage, SQL/ELT engines, and dbt.
Build CI/CD pipelines for data, dbt, and model delivery (GitHub Actions, GitLab, Azure DevOps).
Implement MLOps systems: MLflow (or equivalent), feature stores, model registry, drift detection, automated testing (a minimal sketch follows this list).
Engineer solutions in AWS and AWS GovCloud today, with portability to Azure Gov or GCP.
Use Infrastructure-as-Code (Terraform, CloudFormation, Bicep) to automate secure deployments.
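As an illustration of the MLflow item above, a minimal sketch: train a model, log a parameter and a metric so runs can be compared for drift, and register the artifact in the model registry. The model, data, and registry name are synthetic stand-ins, not Altarum's stack.

```python
# MLflow sketch: hypothetical model and registry name, for illustration only.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    # Params and metrics logged per run support run-to-run drift comparison;
    # registering the model means deployment pulls a versioned artifact.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(
        model, "model", registered_model_name="risk-score-demo"  # hypothetical name
    )
```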
Pipelines & Interoperability
Build scalable ingestion and normalization pipelines for healthcare and public health datasets (a FHIR flattening sketch follows this list), including:
FHIR R4 / US Core (strongly preferred)
HL7 v2 (strongly preferred)
Medicaid/Medicare claims & encounters (strongly preferred)
SDOH & geospatial data (preferred)
Survey, mixed-methods, and qualitative data
Create reusable connectors, dbt packages, and data contracts for cross-division use.
Publish clean, conformed, metrics-ready tables for Analytics Engineering and BI teams.
Support Population Health in turning evaluation and statistical models into pipelines.
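As a sketch of the FHIR ingestion work above: flattening Patient resources from an R4 searchset Bundle into staging-table rows. The field choices and input file are illustrative; a production pipeline would validate against US Core profiles and handle many more resource types.

```python
# Hypothetical FHIR R4 flattening sketch; field selection is illustrative.
import json


def flatten_patients(bundle: dict) -> list[dict]:
    """Extract a flat row per Patient resource in a FHIR R4 Bundle."""
    rows = []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") != "Patient":
            continue
        name = (resource.get("name") or [{}])[0]
        rows.append({
            "patient_id": resource.get("id"),
            "family_name": name.get("family"),
            "given_name": " ".join(name.get("given", [])),
            "birth_date": resource.get("birthDate"),
            "gender": resource.get("gender"),
        })
    return rows


if __name__ == "__main__":
    with open("bundle.json") as f:  # illustrative input file
        print(flatten_patients(json.load(f)))
```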
Data Quality, Reliability & Cost Management
Define SLOs and alerting; instrument lineage & metadata; ensure ≥95% of data tests pass (one possible gate is sketched below).
Perform performance and cost tuning (partitioning, storage tiers, autoscaling) with guardrails and dashboards.
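One possible shape for the ≥95% data-test gate mentioned above, assuming dbt (whose `dbt test` writes per-test statuses to target/run_results.json); the path and threshold are illustrative.

```python
# Hypothetical SLO gate: fail the pipeline if data-test pass rate drops below 95%.
import json
import sys

THRESHOLD = 0.95  # illustrative SLO

with open("target/run_results.json") as f:
    results = json.load(f)["results"]

passed = sum(1 for r in results if r["status"] == "pass")
rate = passed / len(results) if results else 1.0
print(f"data tests passing: {rate:.1%} ({passed}/{len(results)})")

# Nonzero exit lets the orchestrator alert on an SLO breach.
if rate < THRESHOLD:
    sys.exit(1)
```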
Applied ML Enablement
Build production-grade pipelines for risk prediction, forecasting, cost/utilization models, and burden estimation.
Develop ML-ready feature engineering workflows and support time-series/outbreak detection models.
Integrate ML assets into standardized deployment workflows.
Generative AI Enablement
Build ingestion and vectorization pipelines for surveys, interviews, and unstructured text.
Support RAG systems for synthesis, evaluation, and public health guidance.
Enable Palladian Partners with secure, controlled-generation environments.
Causal ML & Evaluation Engineering
Translate R/Stata/SAS evaluation code into reusable pipelines.
Build templates for causal inference workflows (DID, AIPW, CEM, synthetic controls); a minimal DID template is sketched below.
Support operationalization of ARA's applied research methods at scale.
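A minimal difference-in-differences (DID) template of the kind described above, using statsmodels. The input file and column names (outcome, treated, post, unit_id) are hypothetical; the coefficient on the treated:post interaction estimates the average treatment effect.

```python
# Hypothetical DID template; columns and input file are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("evaluation_panel.csv")  # illustrative panel extract

# Two-way interaction OLS: outcome ~ treated + post + treated:post,
# with standard errors clustered at the unit level.
model = smf.ols("outcome ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["unit_id"]}
)
print(model.summary().tables[1])
```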
Responsible AI, Security & Compliance
Implement Model Card Protocol (MCP) and fairness/explainability tooling (SHAP, LIME); a SHAP sketch follows this list.
Ensure compliance with HIPAA, 42 CFR Part 2, IRB/DUA constraints, and NIST AI RMF standards.
Enforce privacy-by-design: tokenization, encryption, least-privilege IAM, and VPC isolation.
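For the explainability tooling named above, a minimal SHAP sketch producing per-feature attributions for a tree model - the kind of artifact a model card might embed. The dataset and model are synthetic stand-ins, not a production risk model.

```python
# Hypothetical SHAP sketch on a stand-in model, for illustration only.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer(as_frame=True)
model = GradientBoostingClassifier().fit(data.data, data.target)

# Tree-model attributions: per-row SHAP values support individual
# explanations; the summary plot gives global feature importance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)
shap.summary_plot(shap_values, data.data, show=False)
```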
Reuse, Shared-Services, and Enablement
Develop runbooks, architecture diagrams, repo templates, and accelerator code.
Pair with data scientists, analysts, and SMEs to build organizational capability.
Provide technical guidance for proposals and client engagements.
Your First 90 Days
You will make a meaningful impact fast. Expected outcomes include:
Platform skeleton operational: repo templates, CI/CD, dbt project, MLflow registry, tests.
Two pipelines in production (e.g., FHIR → analytics and claims normalization).
One end-to-end CoE lighthouse MVP delivered (ingestion → model → metrics → BI).
Completed playbooks for GovCloud deployment, identity/secrets, rollback, and cost control.
Success Metrics (KPIs)
Pipeline reliability meeting SLA/SLO targets.
≥95% data tests passing across pipelines.
MVP dataset onboarding ≤ 4 weeks.
Reuse of platform assets across ≥3 divisions.
Cost optimization and budget adherence.
What You'll Bring
7-10+ years in data engineering, ML platform engineering, or cloud data architecture.
Expert in Python, SQL, dbt, and orchestration tools (Airflow, Glue, Step Functions).
Deep experience with AWS + AWS GovCloud.
CI/CD and IaC experience (Terraform, CloudFormation).
Familiarity with MLOps tools (MLflow, Sagemaker, Azure ML, Vertex AI).
Ability to operate in regulated environments (HIPAA, 42 CFR Part 2, IRB).
Preferred:
Experience with FHIR, HL7, Medicaid/Medicare claims, and/or SDOH datasets.
Databricks, Snowflake, Redshift, Synapse.
Event streaming (Kafka, Kinesis, Event Hubs).
Feature store experience.
Observability tooling (Grafana, Prometheus, OpenTelemetry).
Experience optimizing BI datasets for Power BI.
Logistical Requirements
At this time, we will only accept candidates who are presently eligible to work in the United States and will not require sponsorship.
Our organization requires that all work, for the duration of your employment, must be completed in the continental U.S. unless required by contract.
If you're near one of our offices (Arlington, VA; Silver Spring, MD; or Novi, MI), you'll join us in person one day every other month (6 times per year) for a fun, purpose-driven Collaboration Day. These days are filled with creative energy, meaningful connection, and team brainstorming!
Must be able to work during Eastern Time unless approved by your manager.
Employees working remotely must have a dedicated, ergonomically appropriate workspace free from distractions with a mobile device that allows for productive and efficient conduct of business.
Altarum is a nonprofit organization focused on improving the health of individuals with fewer financial resources and populations disenfranchised by the health care system. We work primarily on behalf of federal and state governments to design and implement solutions that achieve measurable results. We combine our expertise in public health and health care delivery with technology development and implementation, practice transformation, training and technical assistance, quality improvement, data analytics, and applied research and evaluation. Our innovative solutions and proven processes lead to better value and health for all.
Altarum is an equal opportunity employer that provides employment and opportunities to all qualified employees and applicants without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, or any other characteristic protected by applicable law.
Analytics, Data Science and IoT Engineer
Remote job
Role: Analytics, Data Science and IoT Engineer
Responsibilities:
Understanding the requirement and the ability to relate it to statistical algorithms
Knowing the Acceptance Criteria and ways to achieve it
Complete understanding of business processes and data
Performing EDA (Exploratory Data Analysis), cleansing, data preprocessing, data munging, and creating training data sets (a small sketch follows this list)
Using the right statistical models and other statistical methods
Deploying the statistical model using the technology of the customer's preference
Building data pipelines and machine learning pipelines, and setting up monitoring for Continuous Integration, Continuous Development, and Continuous Testing
Investigating the statistical model and providing resolution when there are data drift or performance issues
The Role Offers:
The opportunity to join a global team doing meaningful work that contributes to global strategy and individual development
The chance to re-imagine, redesign, and apply technology to add value to the business and operations
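As a small illustration of the EDA and training-set preparation work listed above; the input file and columns are hypothetical.

```python
# Hypothetical EDA sketch: profile a raw extract, then produce a cleaned
# training set. File and column names are illustrative assumptions.
import pandas as pd

df = pd.read_csv("sensor_readings.csv")  # illustrative IoT extract

print(df.describe(include="all"))      # distributions and cardinality
print(df.isna().mean().sort_values())  # per-column missingness rate

# Simple cleansing: drop sparse columns, impute the rest, de-duplicate.
df = df.loc[:, df.isna().mean() < 0.5]
df = df.fillna(df.median(numeric_only=True)).drop_duplicates()
df.to_parquet("training_set.parquet")  # illustrative output (needs pyarrow)
```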