Engineering SME
Requirements engineer job at Amyx
Amyx is looking to hire an Engineering SME to join our DLA contract.
Responsibilities
Provides technical knowledge and analysis of highly specialized applications and operational environments, along with high-level functional systems analysis, design, integration, documentation, and implementation advice on exceptionally complex problems that require deep subject-matter knowledge to solve effectively.
Participates as needed in all phases of software development with emphasis on the planning, analysis, modeling, simulation, testing, integration, documentation, and presentation phases.
Must be able to communicate accurate information clearly.
Qualifications
Required:
Five (5) years relevant experience
Must possess IT-I Critical Sensitive security clearance or Tier 5 (T5) at time of proposal submission.
Salary: $100,000 - $180,000
Benefits include:
Medical, Dental, and Vision Plans (PPO & HSA options available)
Flexible Spending Accounts (Health Care & Dependent Care FSA)
Health Savings Account (HSA)
401(k) with matching contributions
Roth 401(k) option
Qualified Transportation Expense with matching contributions
Short Term Disability
Long Term Disability
Life and Accidental Death & Dismemberment
Basic & Voluntary Life Insurance
Wellness Program
PTO
11 Holidays
Professional Development Reimbursement
Please contact *************** with any questions!
Amyx is proud to be an Equal Opportunity Employer. All qualified candidates will be considered without regard to race, color, religion, national origin, age, disability, sexual orientation, gender identity, status as a protected veteran, or any other characteristic protected by law. Amyx is a VEVRAA federal contractor and we request priority referral of veterans.
Physical Demands
Employee needs to be able to sit at a workstation for extended periods; use hand(s) to handle or feel objects, tools, or controls; reach with hands and arms; talk and hear. Most positions require ability to work on desktop or laptop computer for extended periods of time reading, reviewing/analyzing information, and providing recommendations, summaries and/or reports in written format. Must be able to effectively communicate with others verbally and in writing. Employee may be required to occasionally lift and/or move moderate amounts of weight, typically less than 20 pounds. Regular and predictable attendance is essential.
AI Engineer
San Francisco, CA jobs
Our client, a San Francisco-based ecommerce startup led by the founder of multiple highly successful Bay Area technology companies, is seeking an AI Engineer to join their rapidly growing team. The engineer will own key learning systems that directly impact the user experience and revenue, developing and deploying ML models for product personalization and recommendations.
Term: Full Time
Location: San Francisco (onsite, commutable by train)
Compensation: $180,000 - $220,000 base salary, full benefits, and equity
Desired Qualifications:
5+ years of experience in machine learning or data science roles
Strong background in ML algorithms, modeling, and deployment practices
Experience with Python, and ML tools such as PyTorch, TensorFlow, XGBoost, or scikit-learn
Experience with recommendation systems, pricing algorithms, or ranking models is preferred (see the illustrative sketch at the end of this posting)
Comfortable working in a fast-paced startup environment
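For illustration only, here is a minimal sketch of the kind of propensity-based product ranking this role describes, using scikit-learn (one of the listed tools); the features, data, and labels below are hypothetical stand-ins, not the client's actual stack.

# Minimal illustrative sketch: rank products for a user by predicted
# purchase propensity. All feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: [user_affinity, item_popularity, price_sensitivity]
X_train = rng.random((1000, 3))
y_train = (X_train[:, 0] * X_train[:, 1] > 0.25).astype(int)  # toy label

model = GradientBoostingClassifier().fit(X_train, y_train)

# Score candidate items for one user and rank them best-first.
candidates = rng.random((50, 3))
scores = model.predict_proba(candidates)[:, 1]
ranking = np.argsort(scores)[::-1]
print("top 5 candidate indices:", ranking[:5])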
Platform Engineer
Richmond, VA jobs
We are seeking a highly skilled and experienced Software Systems Engineer who will be at the forefront of building our automation platform ecosystem - transforming the way we deliver IT infrastructure and services.
The successful candidate will be responsible for designing, implementing, and maintaining our automation and orchestration platforms, ensuring their optimal performance, scalability, and reliability in a dynamic and fast-paced environment.
The candidate will also be a member of a larger platform team and will assist with managing and troubleshooting infrastructure issues related to server OS, virtualization, and container orchestration platforms.
This role is ideal for someone who thrives on building systems from the ground up, enjoys solving complex operational challenges, and has a passion for enabling others through automation.
Key Responsibilities
• Design, deploy, and administer automation platforms, including but not limited to Terraform Enterprise, Ansible Automation Platform, Vault, and Packer (see the illustrative sketch after this list).
• Collaborate with development, operations, security, and COE teams to ensure seamless integration and secure, consistent automation practices.
• Establish and develop operational standards, documentation, and lifecycle management processes.
• Integrate self-service, CMDB, platform security, secrets management, observability, and other solutions.
• Monitor system performance, troubleshoot issues, and optimize the platform for high availability and resilience.
• Implement and manage CI/CD pipelines and GitOps workflows using tools such as GitLab, Jenkins, etc.
• Provide guidance and training to other engineers on automation platforms and related technologies and develop related documentation.
• Stay current with industry trends, emerging technologies, and best practices related to automation platforms, VMs, containerization, and cloud-native architectures.
• Provide supplemental VMware and Kubernetes/container support: troubleshooting issues, deployment and configuration, storage and performance monitoring, and performing security updates.
• Participate in a 24/7 on-call rotation and respond to issues with systems and technologies supported by the team.
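As a concrete illustration of the automation glue this role involves (see the Terraform Enterprise item above), here is a minimal Python sketch that queues a run through the Terraform Enterprise REST API; the hostname, workspace ID, and token are placeholders, and error handling is reduced to the essentials.

# Illustrative sketch: queue a Terraform Enterprise run via its REST API.
# Hostname, workspace ID, and token are placeholders.
import os
import requests

TFE_HOST = "https://tfe.example.com"          # hypothetical TFE hostname
WORKSPACE_ID = "ws-XXXXXXXX"                  # placeholder workspace ID
TOKEN = os.environ["TFE_TOKEN"]               # API token from the environment

payload = {
    "data": {
        "type": "runs",
        "attributes": {"message": "Scheduled drift-check run"},
        "relationships": {
            "workspace": {"data": {"type": "workspaces", "id": WORKSPACE_ID}}
        },
    }
}
resp = requests.post(
    f"{TFE_HOST}/api/v2/runs",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/vnd.api+json",
    },
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("queued run:", resp.json()["data"]["id"])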
Required Skills
• Proven expertise in automation platform deployment and administration (Terraform, Ansible, Packer, Vault, etc.).
• Strong understanding of platform automation architecture, components, and ecosystem, including hands-on experience.
• Automation pipeline development and CI/CD integration.
• Scripting and troubleshooting proficiency (Python, PowerShell, Bash, etc.).
• System reliability and observability (Prometheus, Grafana, etc.).
• Security and access management (SSO, RBAC, PKI).
• Strong problem-solving skills, with a proactive and collaborative approach to troubleshooting and issue resolution.
• Background in infrastructure lifecycle management and capacity planning.
• Solid foundation in infrastructure including understanding of database, networking, DNS, load balancing, storage, and backup concepts and solutions.
• Excellent interpersonal, communication, organizational and technical leadership skills.
Required Years of Experience
• Minimum of 5-8 years of experience in software systems engineering, with a focus on infrastructure engineering, DevOps, or platform operations.
• Minimum of 2 years of hands-on experience administering automation or IaC platforms (Terraform, Ansible, etc.).
Observability Engineer
Greenwood Village, CO jobs
Our client, an industry leader in telecommunications, has an excellent contract opportunity for an Observability Engineer. Work will follow a hybrid on-site/remote schedule in Englewood, CO. The Observability Engineer will contribute significantly to planning, implementing, and maintaining system monitoring and observability artifacts for a complex enterprise network, collaborating closely with developers to integrate observability across APM, NPM, SNMP monitoring, log aggregation, JVM monitoring, and network device monitoring.
Due to client requirements, applicants must be willing and able to work on a W2 basis. For our W2 consultants, we offer a great benefits package that includes Medical, Dental, and Vision benefits, 401k with company matching, and life insurance.
Rate: $50 - $58/hr W2
Responsibilities:
This role blends technical proficiency with collaboration, delivering impactful contributions to our systems and infrastructure.
Contribute to revision control using Git, collaborate on network performance monitoring, automate processes through scripting in Bash and Python, and bring expertise in WiFi monitoring and analysis.
Design observability dashboards and alerting for AWS cloud services
Implement WiFi network monitoring serving millions of users
Develop APM, NPM, and SNMP monitoring solutions
Create automated log aggregation and JVM performance monitoring
Collaborate with development teams on observability integration
Automate monitoring workflows using Python and Bash scripting (see the illustrative sketch after this list)
Maintain monitoring infrastructure using Git version control
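For illustration, a minimal sketch of the kind of Python monitoring automation listed above: probe a set of service endpoints and emit status records that a log aggregator could ingest. The endpoint URLs are hypothetical.

# Illustrative sketch: probe service endpoints and report up/down status.
# Endpoint URLs are hypothetical.
import time
import requests

ENDPOINTS = {
    "api-gateway": "https://api.example.com/healthz",
    "auth-service": "https://auth.example.com/healthz",
}

def probe(name: str, url: str) -> dict:
    """Return a small status record suitable for a log aggregator."""
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=5)
        up = resp.status_code == 200
    except requests.RequestException:
        up = False
    return {"service": name, "up": up,
            "latency_ms": round((time.monotonic() - start) * 1000, 1)}

for name, url in ENDPOINTS.items():
    print(probe(name, url))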
Requirements:
Bachelor's degree in Computer Science or equivalent
Experience building dashboards and monitoring solutions with Grafana, Splunk, or Datadog
Basic knowledge of ticketing and software tools used to support current operations.
Basic knowledge of network devices and network appliances.
AWS Cloud Experience
Site Reliability concepts and culture
Please be advised- If anyone reaches out to you about an open position connected with Eliassen Group, please confirm that they have an Eliassen.com email address and never provide personal or financial information to anyone who is not clearly associated with Eliassen Group. If you have any indication of fraudulent activity, please contact ********************.
Skills, experience, and other compensable factors will be considered when determining pay rate. The pay range provided in this posting reflects a W2 hourly rate; other employment options may be available that may result in pay outside of the provided range.
W2 employees of Eliassen Group who are regularly scheduled to work 30 or more hours per week are eligible for the following benefits: medical (choice of 3 plans), dental, vision, pre-tax accounts, other voluntary benefits including life and disability insurance, 401(k) with match, and sick time if required by law in the worked-in state/locality.
JOB ID: JN -112025-104374
MDM Engineer III
Dublin, CA jobs
W2 Contract-to-Hire
Salary Range: $156,000 - $176,800 per year
The MDM Engineer III is responsible for the development, integration, and implementation of configuration processes, procedures, and solutions within the Master Data Management (MDM) platform. The candidate will develop solutions on the MDM platform, participate in system/design/code reviews, address configuration and administration issues, as well as directly influence the direction of the Product domain on the MDM platform. The candidate will also collaborate with other technology partners to design and build quality, highly scalable solutions/applications, as well as interact with business teams as part of Agile development.
Duties and Responsibilities:
Design, develop, and test incoming and outgoing data feeds, data modeling, governance, and system administration as it pertains to MDM.
Responsible for providing technical consulting to management, business analysts, and technical associates, while working with the integration, architecture, and business teams to deliver MDM solutions.
Deliver high-quality solutions independently, while working collaboratively to share knowledge and ideas, and adapt quickly to the needs of the business.
Partner with Data Governance & Operations teams to deliver based on program/project needs.
Drive the architecture, design, and delivery of the end-state MDM solution in a hands-on manner, including modeling the MDM domains.
Establish monitoring and reporting capabilities for the MDM platform.
Engage with all levels across IT to deliver an Enterprise MDM Program solution (Product), including cross-functional coordination.
Help lead master data integration activities, which include, but are not limited to, data cleansing, data creation, data conversion, issue resolution, and data validation.
Identify, manage, and communicate issues, risks, and dependencies to project management.
Configure the MDM solution in a hands-on manner (Web UI, business rules, and workflow changes).
Provide support for the Master Data Management (MDM) platform, including technical architecture and inbound/outbound data integration (ETL, maintenance and tuning of match rules and exceptions, data model changes, executing and monitoring incremental updates, and working with infrastructure and DBA teams to maintain multiple environments); see the illustrative match-rule sketch after this list.
Contribute to the design of logical and physical Data modeling to support the Enterprise Master Data Management system.
Establish & refine monitoring and reporting capabilities for the new MDM platform.
Provide level 3 support for the MDM platform as needed.
Manage Code configuration and code release management in Non-production environments.
Exceptional verbal communication and technical writing skills
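For illustration, a toy version of the match-rule tuning mentioned above: flag likely duplicate master records by fuzzy name similarity. Real MDM platforms (e.g., STIBO STEP) implement this natively; the records and threshold here are hypothetical.

# Illustrative sketch: a toy "match rule" for product master records,
# flagging likely duplicates by fuzzy name similarity. Data is hypothetical.
from difflib import SequenceMatcher

records = [
    {"id": 1, "name": "Acme Widget 10-Pack"},
    {"id": 2, "name": "ACME widget 10 pack"},
    {"id": 3, "name": "Globex Gizmo"},
]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.85  # tunable match-rule threshold
for i, left in enumerate(records):
    for right in records[i + 1:]:
        score = similarity(left["name"], right["name"])
        if score >= THRESHOLD:
            print(f"possible duplicate: {left['id']} ~ {right['id']} ({score:.2f})")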
Requirements and Qualifications:
8+ years of experience with Master Data Management solutions
5-8 years of experience working within the entire Software Development Lifecycle, including requirements gathering, design, implementation, integration testing, deployment, and post-production support
5-8+ years of experience in system design, implementation, and Level 3 support activities
Strong understanding of Master Data Management concepts, including Object Oriented Design, Programming, and Data Modeling
Strong experience in development configuration of Workflow, Web UI, and Business Rules Action components of MDM (STIBO STEP solution preferred)
Strong experience in identifying performance bottlenecks, providing a solution, and implementing functional performance recommendations
Experience implementing deployment automation to move the data model, Web UI, JavaScript binds, configurations, and product information between environments.
Experience in developing enterprise applications using Java, J2EE, JavaScript, HTML, CSS, Spring Framework
Working experience with at least one major MDM platform such as Oracle, Informatica, or Stibo (preferred)
Working experience in data profiling, data quality designing, and configuring MDM UI, workflows, and rules for business processes, preferably in a retail domain
Experience working in an Agile Environment
Experience working in an infrastructure environment (ability to assist in logins, restarting servers, etc.)
Experience working with Oracle as the main database and/or Linux operating systems
Experience with data modeling and data migration
Experience with security best practices of web applications to address vulnerabilities
Experience with application integration and middleware
Strong communication skills are required, with the ability to give and receive information, explain complex information in simple terms, and maintain a strong customer service approach to all users.
Knowledge of/prior DBA experience with SQL Server and/or Oracle is a plus. Basic knowledge of or experience with UNIX.
Ability to work independently, creatively solve complex technical problems, and provide guidance and training to others
Ability to provide accurate estimates of timeframes necessary to complete potential projects and develop project implementation plans
Bachelor's Degree in Computer Science or related experience
Desired Skills and Experience
Master Data Management (MDM), STIBO STEP, Oracle MDM, Informatica MDM, Java, J2EE, JavaScript, HTML, CSS, Spring Framework, Data Modeling, ETL, Data Integration, Data Migration, Data Profiling, Data Quality, Data Governance, Workflow Configuration, Web UI Development, Business Rules Configuration, System Design, Software Development Lifecycle (SDLC), Agile Development, SQL, Oracle Database, SQL Server, Linux, UNIX, Performance Tuning, Match Rules Configuration, Data Cleansing, Data Validation, API Integration, Middleware, Application Security, Requirements Gathering, Technical Architecture, Level 3 Support, Code Configuration Management, Release Management, Object Oriented Design, Data Conversion, Infrastructure Management, Technical Consulting, Cross-functional Collaboration, Issue Resolution, Risk Management, Technical Documentation
Bayside Solutions, Inc. is not able to sponsor any candidates at this time. Additionally, candidates for this position must qualify as a W2 candidate.
Bayside Solutions, Inc. may collect your personal information during the position application process. Please reference Bayside Solutions, Inc.'s CCPA Privacy Policy at *************************
Cloud Engineer
Philadelphia, PA jobs
Pride Health is hiring a Cloud Security Principal Engineer to support our client's medical facility based in Pennsylvania.
This is a 6-month contract with the possibility of an extension, competitive pay and benefits, and a great way to start working with a top-tier healthcare organization.
Job Title: Cloud Security Principal Engineer
Location: Philadelphia, PA 19104 (Hybrid)
Pay Range: $75.00 - $80.00/hr
Shift: Day Shift
Duration: 6 months + Possible extension
Job Duties:
Proven experience in securing a multi-cloud environment.
Proven experience with Identity and access management in the cloud.
Proven experience with all security service lines in a cloud environment and the supporting security tools and processes to be successful.
Demonstrate collaboration with internal stakeholders, vendors, and supporting teams to design, implement, and maintain security technologies across the network, endpoint, identity, and cloud infrastructure.
Drive continuous improvement and coverage of cloud security controls by validating alerts, triaging escalations, and working with the MSP to fine-tune detection and prevention capabilities (see the illustrative sketch after this list).
Lead or support the development of incident response plans, engineering runbooks, tabletop exercises, and system hardening guides.
Ensure alignment of security architectures with policies, standards, and external frameworks such as NIST SP 800-53, HIPAA, PCI-DSS, CISA ZTMM, CIS Benchmarks, Microsoft CAF Secure Methodology, AWS CAF, the AWS Well-Architected Framework, and Google CAF.
Participate in design and governance forums to provide security input into infrastructure, DevSecOps, and cloud-native application strategies.
Assist with audits, compliance assessments, risk remediation plans, and evidence collection with internal compliance and external third-party stakeholders.
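As a small illustration of validating cloud security controls, here is a hedged Python sketch using boto3 to flag S3 buckets without a full public-access block; it assumes AWS credentials are available in the environment and is a starting point, not a complete audit.

# Illustrative sketch: audit S3 buckets for a missing public-access block,
# the kind of cloud security control validation this role describes.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)
        blocked = all(cfg["PublicAccessBlockConfiguration"].values())
    except ClientError:
        blocked = False  # no configuration at all: treat as a finding
    if not blocked:
        print(f"finding: bucket {name} lacks a full public-access block")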
Required
Bachelor's Degree
At least twelve (12) years industry-related experience, including experience in one to two IT disciplines (such as technical architecture, network management, application development, middleware, information analysis, database management or operations) in a multitier environment.
At least six (6) years' experience with information security, regulatory compliance and risk management concepts.
At least three (3) years' experience with Identity and Access Management, user provisioning, Role Based Access Control, or control self-assessment methodologies and security awareness training.
Experience with Cloud and/or Virtualization technologies.
As a certified minority-owned business, Pride Global and its affiliates - including Russell Tobin, Pride Health, and Pride Now - are committed to creating a diverse environment and are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, pregnancy, disability, age, veteran status, or other characteristics.
Pride Global offers eligible employees comprehensive healthcare coverage (medical, dental, and vision plans), supplemental coverage (accident insurance, critical illness insurance, and hospital indemnity), 401(k) retirement savings, life & disability insurance, an employee assistance program, identity theft protection, legal support, auto and home insurance, pet insurance, and employee discounts with some preferred vendors.
Palantir Engineer
Kansas City, MO jobs
Palantir Engineer
Compensation: $60 - $80 /hour, depending on experience
Inceed has partnered with a great company to help find a skilled Palantir Engineer to join their team!
Step into a dynamic senior data engineering role where you'll design and build reliable data pipelines, transforming raw data into clean, consumable datasets. You'll collaborate with product and analytics teams, ensuring data quality and lineage for downstream users. This hands-on position offers the opportunity to implement best practices for data governance and reliability.
Key Responsibilities & Duties:
Design and maintain scalable data pipelines
Collaborate to translate data requirements into solutions
Build data ingestion frameworks using Confluent Kafka
Implement ELT and ETL pipelines using PySpark and SQL (see the illustrative sketch after this list)
Manage data models and warehousing structures
Ensure data quality through validation mechanisms
Implement data governance practices
Develop automated monitoring and alerting mechanisms
Optimize data processing in GCP environments
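For illustration, a minimal PySpark sketch of the ELT-with-quality-gate pattern described above; the input path, columns, and rejection threshold are hypothetical.

# Minimal illustrative ELT step with a simple data-quality gate.
# Input path, columns, and the 5% rejection threshold are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_elt_sketch").getOrCreate()

raw = spark.read.json("/data/raw/orders/")  # hypothetical landing zone
total = raw.count()

clean = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .dropDuplicates(["order_id"])
)

# Fail the job if too many rows were rejected by the gate.
rejected = total - clean.count()
if total and rejected / total > 0.05:
    raise ValueError(f"data-quality gate failed: {rejected} of {total} rows rejected")

clean.write.mode("overwrite").parquet("/data/clean/orders/")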
Required Qualifications & Experience:
Experience with Palantir Foundry applications
Strong programming skills in Python and PySpark
Proficiency in SQL for data manipulation
Experience with Dataform, Dataproc, and BigQuery
Hands-on experience with Kafka and Confluent
Knowledge of Cloud Scheduler and Dataflow
Understanding of Data Governance principles
Experience using Git for version control
Nice to Have Skills & Experience:
Familiarity with DBT, Machine Learning, and AI concepts
Working knowledge of Infrastructure as Code (IaC)
Perks & Benefits:
3 different medical health insurance plans, dental, and vision insurance
Voluntary and Long-term disability insurance
Paid time off, 401k, and holiday pay
Weekly direct deposit or pay card deposit
If you are interested in learning more about the Palantir Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time.
We are Inceed, a staffing direct placement firm who believes in the possibility of something better. Our mission is simple: We're here to help every person, whether client, candidate, or employee, find and secure what's better for them.
Inceed is an equal opportunity employer. Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity, or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Platform Engineer
Minersville, PA jobs
Contract to hire
Hybrid (1 day/ week)
Pipersville, PA
The Platform Engineer (Developer Enablement) is a senior technical contributor focused on improving the developer experience and accelerating delivery through reusable frameworks, automation, and standardized tooling. This role builds the internal platforms, guardrails, documentation, and libraries that development teams rely on to produce consistent, high-quality code. Operating at the intersection of development, DevOps, QA, and architecture, the Platform Engineer ensures engineering best practices are embedded across the organization.
Key Responsibilities
Design, build, and maintain internal libraries, NuGet packages, shared components, and service templates
Create scaffolding, automation, and code generation tools to streamline project setup and development (see the illustrative sketch after this list)
Define and enforce coding standards, naming conventions, and validation rules
Implement automated quality controls using tools like SonarQube and CI/CD pipeline integrations
Develop and maintain internal frameworks for logging, telemetry, observability, and exception handling
Review cross-team code for adherence to standards and architectural alignment
Build tools and documentation that support developer onboarding and productivity
Collaborate with DevOps, QA, and architecture to create consistent development pathways (“golden paths”)
Provide technical mentorship to developers and advocate for best practices across teams
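For illustration, a language-agnostic sketch of the scaffolding idea (shown in Python for brevity; in a .NET shop this would more likely be a dotnet new template): stamp out a minimal project stub with standard naming applied. All names are hypothetical.

# Language-agnostic sketch of scaffolding: stamp out a new service stub
# from a template, substituting the project name. Names are hypothetical.
from pathlib import Path
from string import Template

TEMPLATE = Template(
    "namespace $project;\n\n"
    "public static class ServiceInfo\n"
    "{\n"
    "    public const string Name = \"$project\";\n"
    "}\n"
)

def scaffold(project: str, out_dir: str) -> Path:
    """Create a minimal project stub with standard naming applied."""
    root = Path(out_dir) / project
    root.mkdir(parents=True, exist_ok=True)
    target = root / "ServiceInfo.cs"
    target.write_text(TEMPLATE.substitute(project=project))
    return target

print("wrote", scaffold("Orders.Api", "./scaffold-demo"))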
Required Qualifications
10+ years of software engineering experience, with 3+ years in developer tooling, frameworks, or enablement roles
Deep expertise in C# and .NET application architecture
Experience developing internal libraries, NuGet packages, shared components, or reusable frameworks
Strong background with code quality and analysis tools (e.g., SonarQube)
Experience with CI/CD integrations and automated development workflows
Strong understanding of structured logging, telemetry, and observability
Ability to influence engineering standards across multiple teams
Excellent communication and technical mentorship skills
Preferred Qualifications
Experience with: Dynatrace, Serilog, Microsoft Orleans, Temporal
Familiarity with Angular, Blazor, or reusable UI component patterns
Experience with developer onboarding, documentation, or productivity tooling
Background in automation, scaffolding, and environment setup tools
Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws
AI/ML Engineer
Concord, CA jobs
Title: AI/ML Platform Engineer
Duration: 12+ months, contract to hire
Top Skills: AI/ML, cloud, Kubernetes
We are seeking a talented AI/ML Platform Engineer with a strong emphasis on cloud to join our dynamic team. In this role, you will be responsible for designing, implementing, and maintaining AI/ML infrastructure and solutions that leverage the full power of the cloud. Your expertise will be crucial in enabling our data scientists and machine-learning engineers to develop, deploy, and scale AI models efficiently and effectively.
Key Responsibilities:
Build robust AI/ML platforms in the cloud, ensuring scalability, reliability, and performance.
Develop automated workflows and pipelines for model training, validation, deployment, and monitoring.
Work closely with data scientists, ML engineers, and other stakeholders to understand their needs and provide optimal solutions.
Continuously optimize the AI/ML infrastructure for cost, performance, and security.
Implement monitoring solutions to ensure the health and performance of AI/ML systems, and troubleshoot any issues that arise.
Maintain comprehensive documentation of the architecture, workflows, and best practices.
Preferred Qualifications:
Proven experience as an AI/ML engineer with a focus on platform engineering and Azure
Experience with machine learning frameworks and libraries such as TensorFlow, PyTorch, or Scikit-learn.
Experience in Terraform and Helm will be an advantage.
Strong understanding of LLMs and proven experience building platforms and applications that leverage them.
Experience with CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes).
Strong understanding of data storage, processing, and ETL workflows.
Knowledge of chunking strategies for vector databases (see the illustrative sketch after this list)
Knowledge of classification and embedding models.
Familiarity with Agile methodologies and project management tools.
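For illustration of the chunking item above, a minimal fixed-size chunking sketch with overlap, as used when preparing documents for a vector database; the sizes are arbitrary.

# Illustrative sketch of a simple fixed-size chunking strategy with overlap.
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks

doc = "word " * 400  # stand-in document
pieces = chunk_text(doc)
print(len(pieces), "chunks; first chunk length:", len(pieces[0]))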
EEO:
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment based on - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
Endpoint Engineer
Denver, CO jobs
The Endpoint Engineer is responsible for deploying, managing, and supporting endpoint technologies across clinical and administrative environments. This role ensures that all devices meet organizational standards for performance, security, and compliance while supporting the unique workflows of a healthcare setting.
Knowledge, Skills, and Abilities
Proficiency in Microsoft 365, Windows Operating Systems, Intune, SCCM, Active Directory, and Group Policy.
Experience with mobile device management (MDM/MAM), such as Intune, JAMF, or Citrix XenMobile.
Familiarity with clinical workflows and healthcare endpoint technologies.
Strong troubleshooting skills across hardware, operating systems, and application environments.
Essential Functions
Endpoint Deployment & Configuration
Build, configure, and deploy desktops, laptops, tablets, thin clients, and mobile devices using enterprise imaging and management tools.
Ensure devices meet hospital security, compliance, and operational standards prior to release.
Advanced Technical Support
Provide Tier II/III support for escalated incidents, service requests, and problem management activities.
Troubleshoot complex hardware, software, and peripheral issues in clinical and administrative environments.
Partner with EUC Support and Help Desk teams to ensure proper issue resolution and knowledge transfer.
Systems & Application Management
Administer and support Microsoft 365, Intune, SCCM, and other enterprise endpoint management solutions.
Package, test, and deploy applications, patches, and updates to the endpoint environment.
Maintain and optimize Group Policy Objects (GPOs), login scripts, and device configurations.
Healthcare Technology Integration
Support clinical workstations, Workstations on Wheels (WoWs), printers, and medical-grade peripherals.
Collaborate with clinical application teams (e.g., Epic Analysts, Nursing Informatics, Clinical Applications) to ensure endpoint compatibility and workflow support.
Security & Compliance
Enforce endpoint security policies, including encryption, EDR, conditional access, and mobile device management.
Participate in vulnerability remediation, patch management, and compliance reporting activities.
Support HIPAA and regulatory requirements by maintaining endpoint security and data protection measures.
Project Participation & Execution
Contribute to enterprise IT projects such as OS upgrades, device lifecycle refreshes, and mobility initiatives.
Assist with design, testing, and rollout of new endpoint standards, technologies, and configurations.
Document project deliverables, lessons learned, and standard operating procedures.
Documentation & Knowledge Sharing
Develop and maintain technical documentation, SOPs, and knowledge base articles.
Provide technical training and guidance to junior staff and end users as needed.
Continuous Improvement
Monitor system performance, identify recurring issues, and recommend improvements.
Stay current with emerging EUC technologies, industry best practices, and healthcare IT trends.
Salesforce Engineer
Yardley, PA jobs
🚀 We're Hiring: Salesforce Engineer
📍 Hybrid: Yardley, PA, Madison, WI, or Boise, ID
Professional Experience
3-5 years of hands-on Salesforce engineering/development experience.
Proven success designing scalable Salesforce solutions for sales & commercial teams.
Expertise in Apex, LWC, Visualforce, SOQL, APIs, and integration tools (see the illustrative sketch below).
Experience implementing AI solutions (Einstein Analytics, predictive modeling, or third-party AI).
Strong experience integrating Salesforce with Pardot, Marketo, HubSpot, or similar platforms.
Experience implementing Salesforce Agentforce and other AI tools to deliver predictive insights and intelligent automation.
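For illustration, a minimal sketch of running a SOQL query through the Salesforce REST API, the kind of integration work this role involves; the instance URL, API version, and token are placeholders.

# Illustrative sketch: run a SOQL query through the Salesforce REST API.
# Instance URL, API version, and token are placeholders.
import os
import requests

INSTANCE = "https://example.my.salesforce.com"   # hypothetical org
TOKEN = os.environ["SF_ACCESS_TOKEN"]            # e.g., from an OAuth flow
SOQL = "SELECT Id, Name FROM Account LIMIT 5"

resp = requests.get(
    f"{INSTANCE}/services/data/v59.0/query",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"q": SOQL},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["Id"], record["Name"])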
Skills & Competencies
Strong analytical and problem-solving abilities.
Excellent communication skills - ability to clearly articulate work across teams is critical.
Experience working in Agile environments (Scrum/Kanban).
Ability to excel both independently and collaboratively in a fast-paced environment.
AI Engineer 4 - Must be local to Mechanicsville, VA 23116
Mechanicsville, VA jobs
Client is seeking an AI Engineer IV to implement, maintain, and manage cloud infrastructure for AI in Azure, as well as configure/maintain monitoring, IaC, and CI/CD for these resources.
Local candidates strongly preferred.
Candidates must be able to come to Mechanicsville, VA to pick up a laptop.
Requirements:
Experience providing guidance on the implementation of AI resources in Azure, and experience implementing, maintaining, and monitoring those resources.
Experience performing regular patching and updates to ensure system security and stability.
Experience overseeing backup operations and recovery processes in mission-critical environments.
Experience planning & executing cloud migrations, and moving workloads from on-premises to the cloud.
Comfortable working in an environment that utilizes multiple frameworks/technologies, including .NET, Java, SQL, Oracle, Microsoft Dynamics, and more.
Experience providing support and troubleshooting issues for production applications.
Experience working with security tools such as Tenable to manage vulnerability remediation, etc.
Experience with tools like SCCM & SCOM for automated patch management & deployment.
Experience crafting disaster recovery (DR) plans to ensure minimal downtime and prevent data loss.
Strong networking knowledge - including understanding OSI layer functionality and network protocols.
Nice-to-haves:
Microsoft-focused certifications, such as: Azure Administrator Associate, Azure Solutions Architect, Azure AI Engineer Associate, etc.
Networking-focused certifications, such as: Network+, CCNA, etc.
Required experience:
Experience providing guidance on the implementation of AI resources in Azure, and implementing, maintaining, and monitoring those resources: 3 years
Experience with implementing, monitoring, and maintaining Azure environments: 8 years
Experience performing regular patching and updates to ensure system security and stability: 10 years
Experience overseeing backup operations and recovery processes in mission-critical environments: 10 years
Experience planning and executing cloud migrations, and moving workloads from on-premises to the cloud: 10 years
Experience providing support and troubleshooting issues for production applications: 10 years
Experience with tools like SCCM and SCOM for automated patch management and deployment: 10 years
AI Engineer
Washington, DC jobs
Job Title: Developer Premium II - AI Engineer
Duration: 7 Months with long term extension
Hybrid Onsite: 4 days per week from Day 1
AI Engineer: The AI Engineer will play a pivotal role in designing, developing, and deploying artificial intelligence solutions that enhance operational efficiency, automate decision-making, and support strategic initiatives for the environmental and social specialists within the client. This role is central to the VPU's digital transformation efforts and will contribute to the development of scalable, ethical, and innovative AI systems.
Qualifications and Experience
Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or related field.
Experience:
Minimum 3 years of experience in AI/ML model development and deployment.
Experience with MLOps tools (e.g., MLflow), Docker, and cloud platforms (AWS, Azure, GCP).
Proven track record in implementing LLMs, RAG, NLP model development and GenAI solutions.
Technical Skills:
Skilled in Azure AI / Google Vertex AI Search, vector databases, RAG fine-tuning, NLP model development, and API management (facilitating access to different sources of data); see the illustrative retrieval sketch after this list
Proficiency in Python, TensorFlow, PyTorch, and NLP frameworks.
Expertise in deep learning, computer vision, and large language models.
Familiarity with REST APIs, NoSQL, and RDBMS.
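For illustration of the RAG skills above, a minimal sketch of the retrieval step: rank stored chunks by cosine similarity to a query embedding. The embeddings are random stand-ins; a real system would call an embedding model.

# Illustrative sketch of the retrieval step in a RAG pipeline.
# Embeddings here are random stand-ins for model-generated vectors.
import numpy as np

rng = np.random.default_rng(1)
chunk_texts = ["policy excerpt A", "policy excerpt B", "policy excerpt C"]
chunk_vecs = rng.random((3, 384))     # stand-in chunk embeddings
query_vec = rng.random(384)           # stand-in query embedding

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(query_vec, v) for v in chunk_vecs]
best = int(np.argmax(scores))
print("retrieved for the prompt:", chunk_texts[best])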
Certifications (Preferred):
Microsoft Certified: Azure AI Engineer Associate
Google Machine Learning Engineer
SAFe Agile Software Engineer (ASE)
Certification in AI Ethics
Objectives of the Assignment:
Develop and implement AI models and algorithms tailored to business needs.
Integrate AI solutions into existing systems and workflows.
Ensure ethical compliance and data privacy in all AI initiatives.
Support user adoption through training and documentation.
Support existing AI solutions through refinement, troubleshooting, and reconfiguration.
Scope of Work and Responsibilities:
AI Solution Development:
Collaborate with cross-functional teams to identify AI opportunities.
Train, validate, and optimize machine learning models.
Translate business requirements to technical specifications.
AI Solution Implementation
Develop code, deploy AI models into production environments, and conduct ongoing model training
Monitor performance, troubleshoot issues, and fine-tune solutions to improve accuracy
Ensure compliance with ethical standards and data governance policies.
User Training and Adoption:
Conduct training sessions for stakeholders on AI tools.
Develop user guides and technical documentation.
Data Analysis and Research:
Collect, preprocess, and engineer large datasets for machine learning and AI applications.
Recommend and implement data cleaning and preparation
Analyze and use structured and unstructured data (including geospatial data) to extract features and actionable insights.
Monitor data quality, detect bias, and manage model/data drift in production environments.
Research emerging AI technologies and recommend improvements.
Governance, Strategy, Support, and Maintenance:
Advise client's staff on AI strategy and policy implications
Contribute to the team's AI roadmap and innovation agenda.
Provide continuous support and contribute towards maintenance and future enhancements.
Deliverables:
Work on proofs of concept to study the technical feasibility of AI use cases
Functional AI applications integrated into business systems.
Documentation of model/application architecture, training data, and performance metrics.
Training materials and user guides.
Develop, train, and deploy AI models tailored to business needs
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
Angular Engineer
Reading, PA jobs
A technology-driven organization is seeking an Angular Engineer to contribute to the design, development, and support of business-critical applications. In this role, you will work across multiple projects, troubleshoot production issues, and participate in the full software development lifecycle. You will also have opportunities to lead the design of select components, introduce new ideas based on industry trends, collaborate closely with cross-functional teams, and mentor junior engineers as part of a diverse technical group.
Responsibilities
Serve as a primary support contact for multiple applications.
Participate in the full application lifecycle, including designing, developing, testing, releasing, and supporting software.
Collaborate with product owners and technical/business leaders to understand requirements and acceptance criteria.
Develop, maintain, test, and troubleshoot applications, ensuring performance, reliability, and maintainability.
Support mission-critical applications and assist in resolving customer issues.
Design backend database schemas and contribute to overall system architecture.
Produce clean, well-documented, and maintainable code that follows defined standards.
Write unit and UI tests; leverage CI/CD pipelines for building and deploying code.
Triage production issues and work with multiple teams to perform root-cause analysis.
Assign and review tasks for junior and offshore engineers.
Participate in interviewing new engineering hires.
Provide input into standards, tools, conventions, and design patterns during discovery and decision-making processes.
Support users by addressing technical questions, concerns, and feasibility inquiries.
Perform other software development-related duties as assigned.
Requirements
Bachelor's Degree in Computer Science, Engineering, or equivalent work experience.
5-7 years of experience with applicable programming languages (e.g., Java, RPG).
Full-stack development experience using technologies such as React, Angular, jQuery, HTML, JavaScript, CSS, Spring Framework, Spring MVC, MyBatis, and RESTful APIs.
Understanding of technical project management principles.
Experience implementing design frameworks, patterns, and software development best practices.
Knowledge of industry technology strategies and modern engineering standards.
Experience with relational database design.
Knowledge of Agile methodologies.
Strong troubleshooting and problem-solving skills.
Ability to research emerging tools and frameworks.
Experience estimating medium to large development efforts.
Excellent communication and interpersonal skills.
Understanding of the full software development lifecycle.
Some exposure to DevOps tools and automation practices.
Ability to meet attendance expectations and work required hours.
Willingness to travel when needed and complete standard pre-employment processes (background checks, screenings, etc.).
SharePoint Engineer
Washington, DC jobs
BlueWater Federal is looking for a SharePoint Engineer to support the Department of Energy in Washington, DC.
As the SharePoint Engineer, you will be responsible for designing, implementing, and maintaining SharePoint environments and solutions. This includes configuring sites, libraries, workflows, and web parts, ensuring system security, and supporting business processes through automation and integration.
Responsibilities
• Install, configure, and maintain SharePoint servers (on-premises and/or SharePoint Online).
• Monitor system performance, troubleshoot issues, and apply patches or updates.
• Manage permissions, security settings, and compliance requirements.
• Design and deploy SharePoint solutions, including custom workflows, forms, and web parts.
• Migrate data and content from legacy systems to SharePoint using scripts or third-party tools.
• Customize SharePoint sites to meet organizational needs.
• Collaborate with IT teams and IA.
• Provide technical support to end users and site owners, and create documentation.
• Ensure adherence to security standards and organizational policies.
• Maintain knowledge of SharePoint best practices and emerging technologies.
Qualifications
• Bachelor's degree
• 10+ years of experience with SharePoint administration and a deep understanding of SharePoint architecture, features, and best practices.
• Must have an active Top Secret clearance with the ability to obtain a Q and SCI clearance
• Proficiency in PowerShell scripting for automation.
• Experience with migrating SharePoint versions on-premises or online (preferably using ShareGate)
• Experience with SharePoint components (Search, Taxonomy, Managed Metadata).
• Experience patching SharePoint servers to meet organizational security standards.
• Experience with HTML, CSS, JavaScript, REST API, and SQL is preferred
BlueWater Federal Solutions is proud to be an Equal Opportunity Employer. All qualified candidates will be considered without regard to race, color, religion, national origin, age, disability, sexual orientation, gender identity, status as a protected veteran, or any other characteristic protected by law. BlueWater Federal Solutions is a VEVRAA federal contractor and we request priority referral of veterans.
Theater Engineer
Colorado Springs, CO jobs
BlueWater Federal is looking for a Theater Engineer to support the analysis of user needs and to develop the design and associated hardware and software recommendations supporting the SEWS program.
Responsibilities
• Support the analysis of user needs and develop the design and associated hardware and software recommendations to support those needs.
• Collaborate with SEWS contractor and government personnel to plan routine and emergency trips.
• Provide rotating 24/7 on-call Tier 2 system support for remote users, to identify and resolve hardware, software, and communication issues, document solutions, and develop recommendations to reduce the frequency of repairs.
• Respond to system outages to ensure issues are resolved per contract requirements.
• Support foreign partner system and network installation, maintenance, and sustainment.
• Support Emergency On-Site Sustainment (EOSS) travel to customer locations as required.
• Respond to system component failures or change requests and plan system change or restoral implementation.
• Plan, develop and conduct user training for existing staff as well as new CCMD and FMS users.
• Travel up to 50% in a year to Foreign Partner locations.
• Perform planning and execution for a single or multi-team sustainment and training trip.
• Update Technical Data Package as required to document system.
• Perform on-site sustainment including but not limited to system operational check out, inventory, system updates, equipment firmware updates and documentation updates.
Qualifications
• 3+ years of experience in systems administration, Tactical Combat Operations, and GCCS
• Must have an active Top Secret clearance with SCI Eligibility
• Knowledge of virtualization concepts and products (VMware); Microsoft Active Directory (AD) for user and groups; Microsoft Operating Systems (Server & Workstation)
• Familiarity with Oracle/Sybase/Postgres database maintenance; Java application servers (Tomcat, JBoss)
• Familiarity with Linux/UNIX applications and services (NFS, SSH, NTP, LDAP, HTTP, Ansible)
• DoD 8570 IAT Level II certification (Security+, CCNA Security, CySA+, GICSP, GSEC, CND, SSCP)
• Partner and Allied nation exercise experience is desired
BlueWater Federal Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability.
We offer a competitive health and wellness benefits package, including medical, dental, and vision coverage. Our compensation package includes generous 401k matching, an employee stock purchase program, life insurance options, and paid time off. Salary range: $135,000 - $145,000
Big Data Engineer
Santa Monica, CA jobs
Our client is seeking a Big Data Engineer to join their team! This position is located in Santa Monica, California.
Design and build core components of a large-scale data platform for both real-time and batch processing, owning key features of big data applications that evolve with business needs
Develop next-generation, cloud-based big data infrastructure supporting batch and streaming workloads, with continuous improvements to performance, scalability, reliability, and availability (see the illustrative streaming sketch after this list)
Champion engineering excellence, promoting best practices such as design patterns, CI/CD, thorough code reviews, and automated testing
Drive innovation, contributing new ideas and applying cutting-edge technologies to deliver impactful solutions
Participate in the full software development lifecycle, including system design, experimentation, implementation, deployment, and testing
Collaborate closely with program managers, product managers, SDETs, and researchers in an open, agile, and highly innovative environment
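For illustration, a minimal sketch of a streaming ingest in the Spark-plus-Kafka style described above; the broker, topic, and paths are hypothetical, and the job assumes the Spark Kafka connector package is available.

# Illustrative sketch: stream events from Kafka to a data lake with
# Structured Streaming. Broker, topic, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_stream_sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker.example.com:9092")
         .option("subscribe", "events")
         .load()
         .select(F.col("value").cast("string").alias("payload"),
                 F.col("timestamp"))
)

query = (
    events.writeStream.format("parquet")
          .option("path", "s3a://example-bucket/streams/events/")
          .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
          .start()
)
query.awaitTermination()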
Desired Skills/Experience:
Bachelor's degree in a STEM field (science, technology, engineering, or mathematics)
5+ years of relevant professional experience
4+ years of professional software development experience using Java, Scala, Python, or similar programming languages
3+ years of hands-on big data development experience with technologies such as Spark, Flink, SingleStore, Kafka, NiFi, and AWS big data tools
Strong understanding of system and application design, architecture principles, and distributed system fundamentals
Proven experience building highly available, scalable, and production-grade services
Genuine passion for technology, with the ability to work across interdisciplinary areas and adopt new tools or approaches
Experience processing massive datasets at the petabyte scale
Proficiency with cloud infrastructure and DevOps tools, such as Terraform, Kubernetes (K8s), Spinnaker, IAM, and ALB
Hands-on experience with modern data warehousing and analytics platforms, including ClickHouse, Druid, Snowflake, Impala, Presto, Kinesis, and more
Familiarity with common web development frameworks, such as Spring Boot, React.js, Vue.js, or Angular
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position is between $52.00 and $75.00 per hour. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
Data Engineer
Tempe, AZ jobs
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance (see the illustrative merge sketch after this list)
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
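For illustration of the Delta Lake work above, a minimal upsert (merge) sketch; the table path and columns are hypothetical, and a Databricks or delta-spark environment is assumed.

# Illustrative sketch of a Delta Lake upsert (merge), a staple of
# Lakehouse pipelines. Table path and columns are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_merge_sketch").getOrCreate()

updates = spark.createDataFrame(
    [(1, "active"), (2, "closed")], ["account_id", "status"]
)

target = DeltaTable.forPath(spark, "/mnt/lake/silver/accounts")  # hypothetical
(
    target.alias("t")
    .merge(updates.alias("u"), "t.account_id = u.account_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)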
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability
Senior Data Engineer
Glendale, CA jobs
Our client is seeking a Senior Data Engineer to join their team! This position is located in Glendale, California.
Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
Build tools and services to support data discovery, lineage, governance, and privacy
Collaborate with other software and data engineers and cross-functional teams
Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS (see the illustrative Airflow sketch after this list)
Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more
Ensure high operational efficiency and quality of Core Data platform datasets to meet SLAs and ensure reliability and accuracy for stakeholders in Engineering, Data Science, Operations, and Analytics
Participate in agile and scrum ceremonies to collaborate and refine team processes
Engage with customers to build relationships, understand needs, and prioritize both innovative solutions and incremental platform improvements
Maintain detailed documentation of work and changes to support data quality and data governance requirements
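For illustration, a minimal Airflow 2.x DAG sketch in the orchestration style described above; the DAG id, schedule, and task bodies are hypothetical.

# Illustrative sketch of a two-task Airflow DAG. All names are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")

def load():
    print("write to the core data platform")

with DAG(
    dag_id="core_data_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2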
Desired Skills/Experience:
5+ years of data engineering experience developing large data pipelines
Proficiency in at least one major programming language, such as Python, Java, or Scala
Strong SQL skills and the ability to create queries to analyze complex datasets
Hands-on production experience with distributed processing systems such as Spark
Experience interacting with and ingesting data efficiently from API data sources
Experience coding with the Spark DataFrame API to create data engineering workflows in Databricks
Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
Experience developing APIs with GraphQL
Deep understanding of AWS or other cloud providers, as well as infrastructure-as-code
Familiarity with data modeling techniques and data warehousing best practices
Strong algorithmic problem-solving skills
Excellent written and verbal communication skills
Advanced understanding of OLTP versus OLAP environments
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay range for this position is between $51.00 and $73.00 per hour. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
AWS Data Engineer
McLean, VA jobs
Responsibilities:
Design, build, and maintain scalable data pipelines using AWS Glue and Databricks (see the illustrative Glue sketch after this list).
Develop and optimize ETL/ELT processes using PySpark and Python.
Collaborate with data scientists, analysts, and stakeholders to enable efficient data access and transformation.
Implement and maintain data lake and warehouse solutions on AWS (S3, Glue Catalog, Redshift, Athena, etc.).
Ensure data quality, consistency, and reliability across systems.
Optimize performance of large-scale distributed data processing workflows.
Develop automation scripts and frameworks for data ingestion, transformation, and validation.
Follow best practices for data governance, security, and compliance.
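For illustration, a minimal AWS Glue job skeleton of the pipeline work described above; the catalog database, table, and output path are placeholders, and the awsglue library is available only inside the Glue runtime.

# Illustrative sketch of a minimal AWS Glue job: read a cataloged table,
# transform, and write to S3. Database/table names are placeholders.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_ctx = GlueContext(SparkContext.getOrCreate())
job = Job(glue_ctx)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (placeholder names).
dyf = glue_ctx.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"
)

# Drop rows missing a key, then write as Parquet to the lake.
df = dyf.toDF().dropna(subset=["order_id"])
df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()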
Required Skills & Experience:
5-8 years of hands-on experience in Data Engineering.
Strong proficiency in Python and PySpark for data processing and transformation.
Expertise in AWS services - particularly Glue, S3, Lambda, Redshift, and Athena.
Hands-on experience with Databricks for building and managing data pipelines.
Experience working with large-scale data systems and optimizing performance.
Solid understanding of data modeling, data lake architecture, and ETL design principles.
Strong problem-solving skills and ability to work independently in a fast-paced environment.
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”