F5 Engineer
Charlotte, NC jobs
This person should be an expert in F5 engineering, load balancing, network engineering, and related areas. They do not need extensive proxy experience, but someone strong enough in F5 for this role will likely understand proxy concepts as well. This team uses Skyhigh Web Gateway, but experience with that specific tool is not required. The most important thing here is that the person has a TON of F5 experience.
Top Skills
Proxy
Firewall
TippingPoint IDS/IPS
SSLO
Additional Skills & Qualifications
Architect platforms and design global enterprise solutions that adhere to information security requirements while meeting business needs, establishing secure network connectivity and leveraging varying content inspection systems for malware prevention, data loss prevention, and forensic analysis
Expertise in one of the following (in order of desirability):
McAfee Web Gateway
Fortinet and Check Point firewalls
TippingPoint IDS/IPS
FireEye (NX/VX/CM)
F5 (SSLO, ASM, APM)
F5 LTM, GTM
Expertise with web proxies for advanced content filtering
Expertise in malware prevention and data loss prevention systems, including zero-day threat prevention
MUST have extensive knowledge of fundamental networking concepts: DNS, DHCP, firewalls, load balancing, IPS, and basic routing/switching; excellent understanding of TCP/IP and packet analysis
Expertise in creating application and network diagrams, including all pertinent flows and decisions
Capability to summarize complex issues into executive summaries
Basic understanding of cryptography, SSL certificates, SSL decryption/offload methodologies, and HSM/HSMaaS
Understanding of cloud encryption and tokenization (e.g., Salesforce topology and integration of Salesforce/ServiceNow clouds with a cloud encryption gateway)
Expertise in virtualization (ESXi server management, vSphere, vCenter, vSAN, vMotion) to transform hardware-based infrastructure into virtual platforms
Experience in automation scripts (such as Ansible, Terraform)
Programming expertise; scripting/automation of various security products
Understanding of machine learning, data modeling, and advanced analytics
Expertise in Linux, Python, Apache, HTML + Bootstrap, and SQL.
Leveraging APIs to enhance automation routines.
5+ years of overall networking experience
Familiarity with the following tools and/or platforms is helpful:
Zscaler, Radware, FireEye, Websense, ScanSafe, IronPort, Damballa, Vontu, Skyhigh, Palantir, and Cloudera platforms
CipherCloud
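As a rough illustration of the API-driven F5 automation this role calls for, the sketch below builds an iControl REST URL for a pool's members and filters a sample response for down members. The hostname, pool name, and abridged response shape are hypothetical; the `/mgmt/tm/ltm` path layout follows F5's documented iControl REST conventions but should be verified against your TMOS version.

```python
import json

# Hypothetical BIG-IP host; iControl REST paths follow the documented
# /mgmt/tm/ltm layout but should be checked against your TMOS version.
BASE = "https://bigip.example.com/mgmt/tm"

def pool_members_url(pool, partition="Common"):
    """Build the iControl REST URL for a pool's member collection."""
    return f"{BASE}/ltm/pool/~{partition}~{pool}/members"

def down_members(payload):
    """Return member names whose monitor state is not 'up'."""
    return [m["name"] for m in payload.get("items", []) if m.get("state") != "up"]

# Abridged sample response; real payloads carry many more fields.
sample = json.loads('{"items": [{"name": "web1:80", "state": "up"},'
                    ' {"name": "web2:80", "state": "down"}]}')

print(pool_members_url("app_pool"))   # URL for the ~Common~app_pool members
print(down_members(sample))           # ['web2:80']
```

In practice the URL would be fetched with an authenticated HTTPS client; the parsing step is separated here so it can be exercised without a live device.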
Experience Level
Expert Level
Job Type & Location
This is a Contract position based out of Charlotte, NC.
Pay and Benefits
The pay range for this position is $65.00 - $85.00/hr.
Eligibility requirements apply to some benefits and may depend on your job classification and length of employment. Benefits are subject to change and may be subject to specific elections, plan, or program terms. If eligible, the benefits available for this temporary role may include the following:
• Medical, dental & vision
• Critical Illness, Accident, and Hospital
• 401(k) Retirement Plan - Pre-tax and Roth post-tax contributions available
• Life Insurance (Voluntary Life & AD&D for the employee and dependents)
• Short and long-term disability
• Health Spending Account (HSA)
• Transportation benefits
• Employee Assistance Program
• Time Off/Leave (PTO, Vacation or Sick Leave)
Workplace Type
This is a fully remote position.
Application Deadline
This position is anticipated to close on Dec 12, 2025.
About TEKsystems:
We're partners in transformation. We help clients activate ideas and solutions to take advantage of a new world of opportunity. We are a team of 80,000 strong, working with over 6,000 clients, including 80% of the Fortune 500, across North America, Europe and Asia. As an industry leader in Full-Stack Technology Services, Talent Services, and real-world application, we work with progressive leaders to drive change. That's the power of true partnership. TEKsystems is an Allegis Group company.
The company is an equal opportunity employer and will consider all applications without regard to race, sex, age, color, religion, national origin, veteran status, disability, sexual orientation, gender identity, genetic information or any characteristic protected by law.
About TEKsystems and TEKsystems Global Services
We're a leading provider of business and technology services. We accelerate business transformation for our customers. Our expertise in strategy, design, execution and operations unlocks business value through a range of solutions. We're a team of 80,000 strong, working with over 6,000 customers, including 80% of the Fortune 500 across North America, Europe and Asia, who partner with us for our scale, full-stack capabilities and speed. We're strategic thinkers, hands-on collaborators, helping customers capitalize on change and master the momentum of technology. We're building tomorrow by delivering business outcomes and making positive impacts in our global communities. TEKsystems and TEKsystems Global Services are Allegis Group companies. Learn more at TEKsystems.com.
The company is an equal opportunity employer and will consider all applications without regard to race, sex, age, color, religion, national origin, veteran status, disability, sexual orientation, gender identity, genetic information or any characteristic protected by law.
AI Engineer
Raleigh, NC jobs
About the Role
We are seeking a Principal AI Engineer to spearhead the development of predictive analytics and machine learning capabilities from the ground up. You'll join a small, collaborative team of BI and Data Engineering professionals and play a pivotal role in transforming how the organization leverages data. Today, we focus on historical reporting; your mission is to enable forward-looking insights through advanced analytics and AI-driven solutions.
This is a hands-on leadership position where you'll partner closely with business stakeholders to understand their needs, identify high-impact opportunities, and deliver solutions that tell a compelling story through data. If you thrive in greenfield projects and want to make a measurable impact, this is the role for you.
Key Responsibilities
Design, develop, and deploy predictive models, machine learning algorithms, and AI-driven solutions to support strategic decision-making.
Build and optimize end-to-end ML pipelines, including data ingestion, feature engineering, model training, and deployment.
Collaborate with business leaders to translate requirements into actionable analytics initiatives and measurable outcomes.
Conduct exploratory data analysis to uncover patterns, trends, and opportunities for predictive insights.
Implement model evaluation and monitoring frameworks to ensure accuracy, scalability, and performance over time.
Integrate predictive insights into BI dashboards and reporting tools for business consumption.
Stay current with emerging technologies in AI/ML, deep learning, and cloud-based analytics platforms.
Serve as a technical mentor and thought leader for AI and advanced analytics within the organization.
Partner with DevOps teams to streamline deployment and operationalization of ML models in production environments.
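The model evaluation and monitoring responsibility above can be sketched in miniature: the snippet below computes confusion counts and precision/recall for a binary classifier, the kind of metric a monitoring framework would track over time. It is a minimal, framework-free illustration; production work would typically lean on Scikit-learn or similar.

```python
def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def precision_recall(y_true, y_pred):
    """Precision and recall, guarding against empty denominators."""
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

p, r = precision_recall([1, 1, 0, 0], [1, 0, 1, 0])
print(p, r)  # 0.5 0.5
```

Tracking these numbers per scoring batch, and alerting when they drift below a baseline, is one simple form of the "accuracy and performance over time" monitoring the role describes.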
Required Qualifications
Expertise in data science, machine learning, or AI engineering roles.
Proven expertise in predictive analytics, statistical modeling, and algorithm development.
Strong programming skills in Python and R, with experience using ML frameworks such as TensorFlow, PyTorch, or Scikit-learn.
Hands-on experience with cloud platforms (Azure preferred; AWS/GCP acceptable) for ML model deployment and scaling.
Solid understanding of data engineering concepts, including ETL processes and data pipeline design.
Familiarity with BI tools (Power BI, Tableau) and ability to embed predictive insights into dashboards.
Excellent communication and stakeholder engagement skills; able to explain complex technical concepts in business terms.
Bachelor's degree in Computer Science, Data Science, or related field (or equivalent experience).
Preferred Skills
Experience with SAP analytics or SAP environments.
Advanced degree (MS or PhD) in a quantitative discipline is a plus.
Knowledge of MLOps practices for continuous integration and deployment of ML models.
Why This Role?
Opportunity to build predictive analytics capabilities from the ground up.
Highly visible position with strong executive support.
Competitive compensation up to $190K and a hybrid work arrangement.
Active Directory Engineer (AD CS, Certificate Web Enrollment, NDES and Online Responder)
Tampa, FL jobs
S3/Strategic Staffing Solutions has an Active Directory Engineering opportunity for a leading utilities client in Tampa, FL. Please review the following if you are interested in joining a leading organization!
Duration: 12 months + possible extension
Pay Rate: $50-60/hr on W2 (W2 only; we cannot do C2C)
Qualifications & Description:
Must have a solid understanding of the AD CS role services like Certificate Web Enrollment, NDES and Online Responder.
We are seeking a highly skilled Senior IT Contractor to lead and manage our enterprise Certificate Management operations, with a strong focus on Microsoft Certificate Management and Active Directory integration. This role is critical to ensuring the security, reliability, and compliance of our digital identity infrastructure.
Key Responsibilities:
Oversee the lifecycle management of digital certificates across the enterprise.
Administer and maintain Microsoft Certificate Services, including deployment, renewal, revocation, and auditing.
Integrate certificate management with Microsoft Active Directory and Group Policy for automated certificate enrollment.
Develop and enforce certificate policies, standards, and procedures.
Monitor certificate expiration and proactively mitigate risks of service disruption.
Collaborate with security, infrastructure, and application teams to support secure communications and authentication.
Troubleshoot certificate-related issues across various platforms and services.
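The expiration-monitoring duty above amounts to scanning a certificate inventory for anything expiring inside a renewal window. The sketch below shows that logic with a hypothetical in-memory inventory; in a real AD CS environment the list would come from the CA database or from querying endpoints directly.

```python
from datetime import datetime, timedelta

def expiring_soon(certs, as_of, window_days=30):
    """Return cert subjects expiring within window_days of as_of, soonest first."""
    cutoff = as_of + timedelta(days=window_days)
    hits = [c for c in certs if c["not_after"] <= cutoff]
    return [c["subject"] for c in sorted(hits, key=lambda c: c["not_after"])]

# Hypothetical inventory entries (subject + expiry), for illustration only.
inventory = [
    {"subject": "web01.corp.example", "not_after": datetime(2025, 1, 10)},
    {"subject": "vpn.corp.example",   "not_after": datetime(2025, 6, 1)},
]
print(expiring_soon(inventory, as_of=datetime(2025, 1, 1)))  # ['web01.corp.example']
```

Running such a check on a schedule and feeding the result into ticketing or alerting is one straightforward way to "proactively mitigate risks of service disruption."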
Senior Data Engineer
Charlotte, NC jobs
**NO 3rd Party vendor candidates or sponsorship**
Role Title: Senior Data Engineer
Client: Global construction and development company
Employment Type: Contract
Duration: 1 year
Preferred Location: Remote based in ET or CT time zones
Role Description:
The Senior Data Engineer will play a pivotal role in designing, architecting, and optimizing cloud-native data integration and Lakehouse solutions on Azure, with a strong emphasis on Microsoft Fabric adoption, PySpark/Spark-based transformations, and orchestrated pipelines. This role will lead end-to-end data engineering, from ingestion through APIs and Azure services to curated Lakehouse/warehouse layers, while ensuring scalable, secure, well-governed, and well-documented data products. The ideal candidate is hands-on in delivery and also brings data architecture knowledge to help shape patterns, standards, and solution designs.
Key Responsibilities
Design and implement end-to-end data pipelines and ELT/ETL workflows using Azure Data Factory (ADF), Synapse, and Microsoft Fabric.
Build and optimize PySpark/Spark transformations for large-scale processing, applying best practices for performance tuning (partitioning, joins, file sizing, incremental loads).
Develop and maintain API-heavy ingestion patterns, including REST/SOAP integrations, authentication/authorization handling, throttling, retries, and robust error handling.
Architect scalable ingestion, transformation, and serving solutions using Azure Data Lake / OneLake, Lakehouse patterns (Bronze/Silver/Gold), and data warehouse modeling practices.
Implement monitoring, logging, alerting, and operational runbooks for production pipelines; support incident triage and root-cause analysis.
Apply governance and security practices across the lifecycle, including access controls, data quality checks, lineage, and compliance requirements.
Write complex SQL, develop data models, and enable downstream consumption through analytics tools and curated datasets.
Drive engineering standards: reusable patterns, code reviews, documentation, source control, and CI/CD practices.
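The API-heavy ingestion pattern described above (pagination, retries, backoff) can be sketched independently of any particular API. In the snippet below, `fetch_page` is an injected callable standing in for a real REST client, so the retry and cursor logic can be shown and tested without network access; names and page shapes are illustrative.

```python
import time

def paged_ingest(fetch_page, max_retries=3, backoff=0.0):
    """Yield records from a paginated source.

    fetch_page(cursor) -> (records, next_cursor); next_cursor of None ends
    the stream. Each page is retried up to max_retries with exponential backoff.
    """
    cursor = None
    while True:
        for attempt in range(max_retries):
            try:
                records, cursor = fetch_page(cursor)
                break
            except Exception:
                if attempt == max_retries - 1:
                    raise
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
        yield from records
        if cursor is None:
            return

# Fake two-page source with one transient failure on the first call.
pages = {None: (["a", "b"], "p2"), "p2": (["c"], None)}
calls = {"n": 0}
def fake_fetch(cursor):
    calls["n"] += 1
    if calls["n"] == 1:
        raise IOError("transient")
    return pages[cursor]

print(list(paged_ingest(fake_fetch)))  # ['a', 'b', 'c']
```

In a real ADF/Fabric ingestion framework the same shape appears with auth-token refresh and rate-limit handling folded into the fetch step.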
Requirements:
Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering with strong focus on Azure Cloud.
Strong experience with Azure Data Factory pipelines, orchestration patterns, parameterization, and production support.
Strong hands-on experience with Synapse (pipelines, SQL pools and/or Spark), and modern cloud data platform patterns.
Advanced PySpark/Spark experience for complex transformations and performance optimization.
Heavy experience with API-based integrations (building ingestion frameworks, handling auth, pagination, retries, rate limits, and resiliency).
Strong knowledge of SQL and data warehousing concepts (dimensional modeling, incremental processing, data quality validation).
Strong understanding of cloud data architectures including Data Lake, Lakehouse, and Data Warehouse patterns.
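The incremental-processing requirement above often lands on a Delta Lake `MERGE` in the Silver-to-Gold step. As a small sketch, the helper below composes such a statement from table and key names; the `UPDATE SET * / INSERT *` shorthand is Databricks Delta syntax, and the table/column names are hypothetical.

```python
def incremental_merge_sql(target, source, keys, watermark_col):
    """Compose a Lakehouse-style MERGE for incremental upserts (Delta/Spark SQL)."""
    on = " AND ".join(f"t.{k} = s.{k}" for k in keys)
    return (
        f"MERGE INTO {target} t USING {source} s ON {on} "
        f"WHEN MATCHED AND s.{watermark_col} > t.{watermark_col} THEN UPDATE SET * "
        f"WHEN NOT MATCHED THEN INSERT *"
    )

sql = incremental_merge_sql("gold.orders", "silver.orders_delta",
                            ["order_id"], "updated_at")
print(sql)
```

The watermark predicate keeps late-arriving but stale rows from overwriting newer data, which is the usual correctness concern in incremental loads.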
Preferred Skills
Experience with Microsoft Fabric (Lakehouse/Warehouse/OneLake, Pipelines, Dataflows Gen2, notebooks).
Architecture experience (formal or informal), such as contributing to solution designs, reference architectures, integration standards, and platform governance.
Experience with DevOps/CI-CD for data engineering using Azure DevOps or GitHub (deployment patterns, code promotion, testing).
Experience with Power BI and semantic model considerations for Lakehouse/warehouse-backed reporting.
Familiarity with data catalog/governance tooling (e.g., Microsoft Purview).
Data Engineer - Hadoop
New York, NY jobs
Data Engineer - Hadoop Administrator
HIGHLIGHTS
Direct Hire
Compensation: BOE
We are seeking a Data Engineer to support Newton, our Data Science R&D compute cluster. This role functions as a Hadoop Administrator embedded within the ML Ops organization, providing hands-on operational support for the platform while partnering directly with data scientists, DevOps, and infrastructure teams. This individual will ensure the health, stability, performance, and usability of the Newton cluster, acting as the primary point of contact for platform support, troubleshooting, and environment optimization.
This is a highly collaborative and technical role with room for long-term career progression.
Key Responsibilities
• Serve as the primary administrator for the Newton Hadoop/Cloudera cluster.
• Provide direct support to data scientists experiencing issues with jobs, workloads, dependencies, cluster resources, or environment performance.
• Troubleshoot complex Hadoop, Spark, Python, and OS-level issues; drive root cause analysis and implement permanent fixes.
• Coordinate closely with DevOps to ensure patching, upgrades, infrastructure changes, and system reliability activities are completed on schedule.
• Monitor cluster performance, capacity, and resource utilization; tune and optimize for efficiency and cost.
• Manage Hadoop and Cloudera configurations, services, security, policies, and operational health.
• Implement automation and scripting to improve operational workflows and reduce manual intervention.
• Validate vendor patches, updates, and upgrades and coordinate deployments with DevOps and infrastructure teams.
• Maintain documentation, operational runbooks, troubleshooting guides, and environment standards.
• Serve as a liaison between Data Science, ML Ops, Infrastructure, and DevOps teams to ensure seamless platform operations.
• Support the organization's commitment to protecting the integrity, availability, and confidentiality of systems and data.
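The capacity-monitoring and automation duties above often start with parsing `hdfs dfsadmin -report` style output to flag hot datanodes. The sketch below does that over an abridged sample; field names and exact layout vary by Hadoop/Cloudera version, so treat the regex as a starting point rather than a fixed format.

```python
import re

# Abridged sample resembling `hdfs dfsadmin -report` output (format varies
# by Hadoop version; hostnames are made up).
report = """\
Name: 10.0.0.11:9866 (dn1)
DFS Used%: 83.20%
Name: 10.0.0.12:9866 (dn2)
DFS Used%: 41.75%
"""

def hot_datanodes(text, threshold=80.0):
    """Return (node, used_pct) pairs whose DFS usage exceeds the threshold."""
    nodes = re.findall(r"Name: (\S+).*?DFS Used%: ([\d.]+)%", text, re.S)
    return [(n, float(p)) for n, p in nodes if float(p) > threshold]

print(hot_datanodes(report))  # [('10.0.0.11:9866', 83.2)]
```

Wiring this into a scheduled check that opens a ticket or triggers a rebalance is a typical first automation win for a cluster administrator.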
Required Technical Skills
• Strong hands-on experience with Hadoop administration, ideally within Cloudera environments.
• Proficiency with Python, particularly for automation and data workflows.
• Experience with Apache Spark (supporting jobs, tuning performance, understanding resource usage).
• Solid understanding of Linux/Unix systems administration, shell scripting, permissions, networking basics, and OS-level troubleshooting.
• Experience supporting distributed compute environments or large-scale data platforms.
• Familiarity with DevOps collaboration (patching, upgrades, deployments, incident response, etc.).
Required Soft Skills & Competencies
• Excellent communication skills with the ability to work directly with data scientists and technical end users.
• Ability to coordinate with multiple technical teams (DevOps, Infrastructure, ML Ops).
• Strong troubleshooting and problem-solving capabilities.
• Ability to manage multiple priorities in a fast-moving environment.
Preferred Skills (Nice to Have)
• Experience with ML Ops environments or supporting machine learning workflows.
• Experience with cluster performance optimization and capacity planning.
• Background in distributed systems or data engineering.
Data Engineer
New York, NY jobs
Data Engineer - Data Migration Project
6-Month Contract (ASAP Start)
Hybrid - Manhattan, NY (3 days/week)
We are seeking a Data Engineer to support a critical data migration initiative for a leading sports entertainment and gaming company headquartered in Manhattan, NY. This role will focus on transitioning existing data workflows and analytics pipelines from Amazon Redshift to Databricks, optimizing performance and ensuring seamless integration across operational reporting systems. The ideal candidate will have strong SQL and Python skills, experience working with Salesforce data, and a background in data engineering, ETL, or analytics pipeline optimization. This is a hybrid role requiring collaboration with cross-functional analytics, engineering, and operations teams to enhance data reliability and scalability.
Minimum Qualifications:
Advanced proficiency in SQL, Python, and SOQL
Hands-on experience with Databricks, Redshift, Salesforce, and DataGrip
Experience building and optimizing ETL workflows and pipelines
Familiarity with Tableau for analytics and visualization
Strong understanding of data migration and transformation best practices
Ability to identify and resolve discrepancies between data environments
Excellent analytical, troubleshooting, and communication skills
Responsibilities:
Modify and migrate existing workflows and pipelines from Redshift to Databricks.
Rebuild data preprocessing structures that prepare Salesforce data for Tableau dashboards and ad hoc analytics.
Identify and map Redshift data sources to their Databricks equivalents, accounting for any structural or data differences.
Optimize and consolidate 200+ artifacts to improve efficiency and reduce redundancy.
Implement Databricks-specific improvements to leverage platform capabilities and enhance workflow performance.
Collaborate with analytics and engineering teams to ensure data alignment across business reporting systems.
Apply a “build from scratch” mindset to design scalable, modernized workflows rather than direct lift-and-shift migrations.
Identify dependencies on data sources not yet migrated and assist in prioritization efforts with the engineering team.
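One concrete slice of the Redshift-to-Databricks work above is rewriting dialect-specific SQL. The sketch below shows the idea with two common, safe rewrites; it is illustrative, not exhaustive, and a real migration needs per-artifact review (function coverage, DISTKEY/SORTKEY removal, semantic differences). Table and column names are hypothetical.

```python
import re

# A couple of common Redshift -> Databricks SQL rewrites, for illustration.
REWRITES = [
    (re.compile(r"\bGETDATE\(\)", re.I), "current_timestamp()"),
    (re.compile(r"\bNVL\b", re.I), "coalesce"),
]

def translate(sql):
    """Apply simple pattern-based dialect rewrites to one SQL statement."""
    for pattern, replacement in REWRITES:
        sql = pattern.sub(replacement, sql)
    return sql

src = "SELECT NVL(amount, 0), GETDATE() FROM bets"
print(translate(src))  # SELECT coalesce(amount, 0), current_timestamp() FROM bets
```

Even when most artifacts are rebuilt rather than lifted and shifted, a translation pass like this helps inventory which of the 200+ artifacts are trivially portable and which need redesign.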
What's in it for you?
Opportunity to lead a high-impact data migration initiative at a top-tier gaming and entertainment organization.
Exposure to modern data platforms and architecture, including Databricks and advanced analytics workflows.
Collaborative environment with visibility across analytics, operations, and engineering functions.
Ability to contribute to the foundation of scalable, efficient, and data-driven decision-making processes.
EEO Statement:
Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
Lead Data Engineer
Roseland, NJ jobs
Job Title: Lead Data Engineer.
Hybrid Role: 3 Times / Week.
Type: 12 Months Contract - Rolling / Extendable Contract.
Work Authorization: Candidates must be authorized to work in the U.S. without current or future sponsorship requirements.
Must haves:
AWS.
Databricks.
Lead experience (strong staff-level experience may also be considered).
Python.
Pyspark.
Contact Center Experience is a nice to have.
Job Description:
As a Lead Data Engineer, you will spearhead the design and delivery of a data hub/marketplace aimed at providing curated client service data to internal data consumers, including analysts, data scientists, analytic content authors, downstream applications, and data warehouses. You will develop a service data hub solution that enables internal data consumers to create and maintain data integration workflows, manage subscriptions, and access content to understand data meaning and lineage.
You will design and maintain enterprise data models for contact center-oriented data lakes, warehouses, and analytic models (relational, OLAP/dimensional, columnar, etc.). You will collaborate with source system owners to define integration rules and data acquisition options (streaming, replication, batch, etc.). You will work with data engineers to define workflows and data quality monitors. You will perform detailed data analysis to understand the content and viability of data sources to meet desired use cases and help define and maintain the enterprise data taxonomy and data catalog.
This role requires clear, compelling, and influential communication skills. You will mentor developers and collaborate with peer architects and developers on other teams.
TO SUCCEED IN THIS ROLE:
Ability to define and design complex data integration solutions with general direction and stakeholder access.
Capability to work independently and as part of a global, multi-faceted data warehousing and analytics team.
Advanced knowledge of cloud-based data engineering and data warehousing solutions, especially AWS, Databricks, and/or Snowflake.
Highly skilled in RDBMS platforms such as Oracle and SQL Server.
Familiarity with NoSQL DB platforms like MongoDB.
Understanding of data modeling and data engineering, including SQL and Python.
Strong understanding of data quality, compliance, governance and security.
Proficiency in languages such as Python, SQL, and PySpark.
Experience in building data ingestion pipelines for structured and unstructured data for storage and optimal retrieval.
Ability to design and develop scalable data pipelines.
Knowledge of cloud-based and on-prem contact center technologies such as Salesforce.com, ServiceNow, Oracle CRM, Genesys Cloud, Genesys InfoMart, Calabrio Voice Recording, Nuance Voice Biometrics, IBM Chatbot, etc., is highly desirable.
Experience with code repository and project tools such as GitHub, JIRA, and Confluence.
Working experience with CI/CD (Continuous Integration & Continuous Deployment) process, with hands-on expertise in Jenkins, Terraform, Splunk, and Dynatrace.
Highly innovative with an aptitude for foresight, systems thinking, and design thinking, with a bias towards simplifying processes.
Detail-oriented with strong analytical, problem-solving, and organizational skills.
Ability to clearly communicate with both technical and business teams.
Knowledge of Informatica PowerCenter, Data Quality, and Data Catalog is a plus.
Knowledge of Agile development methodologies is a plus.
Having a Databricks data engineer associate certification is a plus but not mandatory.
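The data-quality monitoring mentioned above can be made concrete with a small completeness check over a batch of records. The sketch below is framework-free and the field names (contact-center-flavored) are hypothetical; in practice this logic would run inside a Databricks job or an Informatica Data Quality rule.

```python
def quality_report(records, required, threshold=0.95):
    """Measure per-field completeness across a batch; flag fields below threshold."""
    total = len(records)
    report = {}
    for field in required:
        present = sum(1 for r in records if r.get(field) not in (None, ""))
        report[field] = present / total if total else 0.0
    failures = [f for f, ratio in report.items() if ratio < threshold]
    return report, failures

# Hypothetical contact-center batch with one missing agent id.
batch = [
    {"case_id": "1", "channel": "voice", "agent": "a7"},
    {"case_id": "2", "channel": "chat",  "agent": ""},
]
report, failures = quality_report(batch, ["case_id", "channel", "agent"])
print(failures)  # ['agent'] (completeness 0.5, below the 0.95 threshold)
```

Publishing such a report alongside each hub dataset gives subscribers a lineage-adjacent signal about whether a feed is safe to consume.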
Data Engineer Requirements:
Bachelor's degree in computer science, information technology, or a similar field.
8+ years of experience integrating and transforming contact center data into standard, consumption-ready data sets incorporating standardized KPIs, supporting metrics, attributes, and enterprise hierarchies.
Expertise in designing and deploying data integration solutions using web services with client-driven workflows and subscription features.
Knowledge of mathematical foundations and statistical analysis.
Strong interpersonal skills.
Excellent communication and presentation skills.
Advanced troubleshooting skills.
Regards,
Purnima Pobbathy
Senior Technical Recruiter
************
| ********************* | Themesoft Inc |
DevOps Engineer
Jersey City, NJ jobs
Grow your career as a DevOps Engineer with an innovative global bank in Jersey City, NJ. Contract role with strong possibility of extension. Will require working a hybrid schedule 2-3 days onsite per week.
Join one of the world's most renowned global banks, a trusted brand with over 200 years of continuously evolving financial services worldwide. You will work alongside some of the smartest minds in the industry who are excited to share their knowledge and to learn from you.
Contract Duration: 12+ Months
Required Skills & Experience
Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
10+ years of experience in a DevOps, Site Reliability Engineering (SRE), or Infrastructure Engineering role.
Proven expertise with at least one major cloud provider (e.g., AWS, Azure, GCP), including services like compute, networking, storage, and databases, specifically within a Linux environment.
Proficiency in scripting and programming languages (e.g., Java, Bash, SQL, Python) for automation and tool development.
Extensive experience with CI/CD tools (e.g., Jenkins, Tekton, TeamCity).
Solid understanding and hands-on experience with containerization technologies (Docker) and orchestration platforms (Kubernetes, OpenShift).
Experience with monitoring and logging solutions (e.g., Prometheus, Grafana, ELK Stack, Datadog).
Experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible.
Strong understanding of Linux operating systems and networking fundamentals.
Experience with Oracle databases.
Excellent communication and collaboration skills.
Desired Skills & Experience
Relevant professional certifications (e.g., AWS Certified DevOps Engineer, Kubernetes Administrator, Oracle Certified Professional).
Experience with microservices architectures and serverless computing.
Knowledge of security best practices, compliance frameworks (e.g., ISO 27001, SOC 2), and security tools.
What You Will Be Doing
Design, implement, and manage highly available, scalable, and secure cloud infrastructure, including Virtual Machines, OpenShift, and Kubernetes.
Develop, maintain, and optimize CI/CD pipelines to enable rapid and reliable software deployment.
Conduct and participate in Disaster Recovery Testing to ensure system resiliency and business continuity.
Implement and manage Infrastructure as Code (IaC) using tools like Terraform, CloudFormation, or Ansible for automated provisioning and configuration.
Monitor system performance, troubleshoot complex issues, and ensure proactive incident resolution across our environments.
Automate operational tasks, build tools, and improve processes to enhance efficiency and reduce manual effort.
Collaborate closely with development teams to integrate DevOps practices, promote best practices, and improve system reliability and performance.
Implement and enforce security best practices and compliance standards within the infrastructure and operational workflows.
Participate in an on-call rotation to support critical production systems and respond to incidents.
Data Engineer
Hamilton, NJ jobs
Key Responsibilities:
Manage and support batch processes and data pipelines in Azure Databricks and Azure Data Factory.
Integrate and process Bloomberg market data feeds and files into trading or analytics platforms.
Monitor, troubleshoot, and resolve data and system issues related to trading applications and market data ingestion.
Develop, automate, and optimize ETL pipelines using Python, Spark, and SQL.
Manage FTP/SFTP file transfers between internal systems and external vendors.
Ensure data quality, completeness, and timeliness for downstream trading and reporting systems.
Collaborate with operations, application support, and infrastructure teams to resolve incidents and enhance data workflows.
Required Skills & Experience:
10+ years of experience in data engineering or production support within financial services or trading environments.
Hands-on experience with Azure Databricks, Azure Data Factory, Azure Storage, Logic Apps, and Fabric.
Strong Python and SQL programming skills.
Experience with Bloomberg data feeds (BPIPE, TSIP, SFTP).
Experience with Git, CI/CD pipelines, and Azure DevOps.
Proven ability to support batch jobs, troubleshoot failures, and manage job scheduling.
Experience handling FTP/SFTP file transfers and automation (e.g., using scripts or managed file transfer tools).
Solid understanding of equities trading, fixed income trading, trading workflows, and financial instruments.
Excellent communication, problem-solving, and stakeholder management skills.
Data Engineer
New York, NY jobs
Role: Data Engineer
Type: Contract-to-Hire or Full-Time
Domain: Finance preferred, not mandatory
Key Skills & Requirements:
Must-have: Strong Python and Informatica experience
AI/ML Exposure: Familiarity with building and fine-tuning models; not a heavy AI/ML developer role
ETL & Data Engineering: Hands-on experience with ETL pipelines, data modeling, and “data plumbing”
Nice-to-have: Snowflake ecosystem experience, HR systems exposure
Ability to collaborate with business teams handling Data Science
Candidate Profile:
Hands-on Data Engineer comfortable building data pipelines and models
Exposure to AI/ML concepts without being a full AI specialist
Finance domain experience is a plus
Software Engineer III [80606]
New York, NY jobs
Onward Search is partnering with a leading tech client to hire a Software Engineer III to help build the next generation of developer infrastructure and tooling. If you're passionate about making developer workflows faster, smarter, and more scalable, this is the role for you!
Location: 100% Remote (EST & CST Preferred)
Contract Duration: 6 months
What You'll Do:
Own and maintain Bazel build systems and related tooling
Scale monorepos to millions of lines of code
Collaborate with infrastructure teams to define best-in-class developer workflows
Develop and maintain tools for large-scale codebases
Solve complex problems and improve developer productivity
What You'll Need:
Experience with Bazel build system and ecosystem (e.g., rules_jvm_external, IntelliJ Bazel plugin)
Fluency in Java, Python, Starlark, and TypeScript
Strong problem-solving and collaboration skills
Passion for building highly productive developer environments
Perks & Benefits:
Medical, Dental, and Vision Insurance
Life Insurance
401k Program
Commuter Benefits
eLearning & Education Reimbursement
Ongoing Training & Development
This is a fully remote, contract opportunity for a motivated engineer who loves working in a flow-focused environment and improving developer experiences at scale.
Data Engineer
Jersey City, NJ jobs
Mastech Digital Inc. (NYSE: MHH) is a minority-owned, publicly traded IT staffing and digital transformation services company. Headquartered in Pittsburgh, PA, and established in 1986, we serve clients nationwide through 11 U.S. offices.
Role: Data Engineer
Location: Merrimack, NH/Smithfield, RI/Jersey City, NJ
Duration: Full-Time/W2
Job Description:
Must-Haves:
Python for running ETL batch jobs
Heavy SQL for data analysis, validation and querying
AWS, including the ability to move data through the data stages and into the target databases.
PostgreSQL is the target database, so experience with it is required.
Nice to haves:
Snowflake
Java for API development (training provided)
Experience in asset management for domain knowledge.
Production support debugging and processing of vendor data
The Expertise and Skills You Bring
A proven foundation in data engineering - bachelor's degree preferred, 10+ years of experience
Extensive experience with ETL technologies
Design and develop ETL reporting and analytics solutions.
Knowledge of Data Warehousing methodologies and concepts - preferred
Advanced data manipulation languages and frameworks (Java, Python, JSON) - required
RDBMS experience (Snowflake, PostgreSQL) - required
Knowledge of cloud platforms and services (AWS - IAM, EC2, S3, Lambda, RDS) - required
Designing and developing low- to moderate-complexity data integration solutions - required
Experience with DevOps, Continuous Integration and Continuous Delivery (Maven, Jenkins, Stash, Ansible, Docker) will be preferred
Expert in SQL and stored procedures on relational databases
Strong debugging, analysis, and production support skills
Application Development based on JIRA stories (Agile environment)
Demonstrable experience with ETL tools (Informatica, Snaplogic)
Experience in working with Python in an AWS environment
Create, update, and maintain technical documentation for software-based projects and products.
Solving production issues.
Interact effectively with business partners to understand business requirements and assist in generation of technical requirements.
Participate in architecture, technical design, and product implementation discussions.
Working Knowledge of Unix/Linux operating systems and shell scripting
Experience with developing sophisticated Continuous Integration & Continuous Delivery (CI/CD) pipeline including software configuration management, test automation, version control, static code analysis.
Excellent interpersonal and communication skills
Ability to work with global Agile teams
Proven ability to deal with ambiguity and work in fast paced environment
Ability to mentor junior data engineers.
The Value You Deliver
The associate will help the team design and build best-in-class data solutions using a highly diversified tech stack.
Strong experience of working in large teams and proven technical leadership capabilities
Knowledge of enterprise-level implementations like data warehouses and automated solutions.
Ability to negotiate, influence and work with business peers and management.
Ability to develop and drive a strategy as per the needs of the team
Good to have: Full-Stack Programming knowledge, hands-on test case/plan preparation within Jira
Ruby on Rails Staff Engineer
Tampa, FL jobs
About Us:
We are working with a mission-driven SaaS company dedicated to keeping people safe. They're passionate about public safety and strive to create innovative solutions that keep their customers happy.
Job Description:
As a Senior Staff Engineer, you will play a crucial role in building and maintaining our cutting-edge web applications. You will work closely with our engineering team to design, develop, and deploy robust, scalable, and user-friendly features.
Onsite in Tampa, FL Area
Up to $220k base salary
Responsibilities:
Design, develop, and maintain backend applications using Ruby on Rails.
Build user-friendly and responsive frontend interfaces using React.
Collaborate with cross-functional teams to define and implement new features.
Write clean, well-tested, and efficient code.
Qualifications:
Strong proficiency in Ruby on Rails, React, and React Native.
Experience with relational databases (e.g., PostgreSQL).
Solid understanding of JavaScript and web development fundamentals.
Familiarity with RESTful APIs and microservices architecture.
Founding Robotics Software Engineer
New York, NY jobs
Salary Range: $150,000 - $250,000 + Equity
Working Arrangement: Full Time - On-Site
Cubiq is currently representing an award-winning, early-stage, Y Combinator-backed start-up in its pursuit of a founding Robotics Software Engineer to enable the deployment and growth of its Natural Language Robotics Interface.
This company is still very small, headed up by 2 ex-Google AI engineers who were instrumental in the development of Claude. They're looking to add a software engineer who can implement their AI algorithms into any and all types of robots so they can be easily controlled by anyone.
The successful candidate will become a key part of these plans, and of the company as a whole, as it continues to grow. You will build the core infrastructure and services for the system, design APIs and interfaces between the AI models, hardware, and human operators, and work across the stack from robot-facing services to real-time agent orchestration.
This is an on-site position in central New York, offering a salary between $140,000 and $220,000 that can stretch for the right candidate. There is full healthcare coverage and a 401K match, but the equity on offer is the real benefit of this role.
The right candidate will need the following experience:
Previous experience working with an Embodied AI system
2+ years of software experience working with LLMs, RAG, or VLMs
Strong Python experience
Experience scaling real-time systems, building data pipelines, or integrating AI/ML models
High agency and good communication skills
If you have the experience mentioned above, apply immediately! Interviews are already happening!
Junior Technical Release Engineer
Charlotte, NC jobs
Brooksource is seeking a Junior Technical Release Engineer to join our Fortune 500 banking client in the Charlotte, NC area. You will be joining our client's BILD (Banking, Invest, Lending, and Digital) Tech team. This role will support our Release Management and Environments Management functions by tracking and driving strategic work efforts, identifying and implementing process improvements, maintaining team documentation, and coordinating test environment needs for strategic initiatives. This role will be key to ensuring the team stays focused on our strategic goals of modernization, automation, and engineering excellence.
This position is ideal for recent graduates from universities or boot-camps, veterans, or individuals with up to one year of professional IT experience and a long-term interest in technology.
Logistics:
Charlotte, NC (Hybrid 3 days onsite)
Full time (40 hours per week)
First year salary: $62,000+
Start Date: February 2026
We are currently unable to provide sponsorship
Responsibilities:
Project Management: Own the project plan for our release automation strategy and other modernization and improvement initiatives.
Process Improvement: Identify and implement process improvements to enhance release and environments management functions.
Documentation: Create, maintain, and improve team documentation in Confluence.
Take ownership: Proactively drive resolution of escalated environment outages, team blockers, and other impediments with unclear ownership.
Environment Coordination: Partner with delivery and tech leads from development teams of incoming strategic initiatives to identify special environment needs and track the status of environment configuration activities.
Qualifications:
Bachelor's degree or bootcamp certificate in Computer Science, Information Technology, or a related field.
Strong organizational skills with the ability to handle multiple tasks simultaneously.
Excellent communication skills, both written and verbal.
Basic understanding of software development lifecycle (SDLC) and release management processes.
Ability to work collaboratively in a team environment.
Detail-oriented with a commitment to quality.
Top Skills Needed:
Excellent team player with good organizational, communication, analytical, and logical skills.
Familiarity with Agile Methodologies
Basic understanding of the Software Development Lifecycle
Nice to Haves:
Experience with Jira or related project tracking tools
Experience with release management tools and version control systems
Brooksource provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
System Engineer
New York, NY jobs
NYC-Based Hedge Fund / Midtown Manhattan
Our client, a dynamic NYC-based hedge fund / investment management firm, is seeking a System Engineer to join its in-house technology team. The firm offers an incredible suite of benefits, including profit sharing; fully paid health, dental, and vision benefits; and the chance to learn and grow alongside an incredible team of technologists.
As a System Engineer, you'll work closely with all areas of the IT organization and other internal stakeholders to provide both onsite and remote support, maintain system uptime, and contribute towards IT projects and Information Security initiatives.
Core Responsibilities:
Provide day-to-day support for end users across Windows and Mac environments, troubleshooting desktop, laptop, and mobile device issues
Deploy and configure new workstations, manage software and application installations, and administer company devices using Microsoft Intune
Oversee employee onboarding and offboarding, including IT orientations, account provisioning, and hardware setup
Troubleshoot and support widely used software platforms such as Microsoft 365, SharePoint, Bloomberg, Adobe, Zoom, and Microsoft Teams
Maintain detailed documentation of help desk tickets to support root cause analysis and ongoing issue resolution
Assist with research and initiatives related to information security, and support broader IT infrastructure and technology projects as needed
Qualifications:
Bachelor's degree in Information Technology, Computer Science, or a related field preferred
6+ years of experience in IT support or system administration within a fast-paced professional environment (Financial Services Preferred, but not required)
Proficiency in Windows 10/11, Windows Server (2016/2019/2022)
Familiarity with Microsoft Intune, Azure, and PowerShell strongly preferred
Solid understanding of networking concepts and information security best practices
Excellent problem-solving skills, strong communication abilities, and a collaborative, team-oriented mindset
Highly organized with strong attention to detail and a commitment to providing high-quality user support
Systems Engineer
New York, NY jobs
Sharp Decisions is looking for the following:
Role: Systems Admin & Engineering - Vice President
Role Description: Install and configure servers, storage devices, network and telecommunications equipment. Perform upgrades and maintenance on all IT infrastructure hardware. Install and configure server operating systems, database software, file server structure and protocols, email servers, authentication servers, back-up systems and firewalls. Monitor system performance daily. Provide support on all IT infrastructure-related issues. Manage incident response for any major outages.
Role Objectives: Delivery
Install and configure servers, storage devices, network and telecommunications equipment as required by business need and outlined by the system architect's designs. Perform upgrades and maintenance on all IT infrastructure hardware. Install and configure server operating systems, database software, file server structure and protocols, email servers, authentication servers, back-up systems and firewalls. Configure domain and security policies and define access lists. Configure and deploy monitoring and reporting utilities, operations logs and access auditing tools. Monitor system performance daily and run reports on key metrics such as storage capacity, server resource utilization, connectivity and uptime, back-up performance and incident logs. Provide support on all IT infrastructure-related issues and escalation support for all other IT departments. Manage incident response for any major outages. Test disaster recovery systems and implement disaster response plans as needed.
Role Objectives: Interpersonal
Collaborate with IT systems architect to execute system designs. Provide reports and analysis on system performance and support planning of infrastructure improvements. Engage with other IT infrastructure teams to install hardware, configure devices and policies, and coordinate upgrades and maintenance. Coordinate incident response with all engineers and system owners during outages. Partner with application, web administration, database and other development focused teams to create infrastructure plans and review system performance. Work with disaster recovery, storage management and cyber security architects to implement systems, hardware and policy configurations to support their designs.
Role Objectives: Expertise
Demonstrate comprehensive understanding of IT infrastructure hardware, cloud service platforms, operating systems, virtual environments and configuration tools. Display expertise with application integration, update and change management, database design and structures, system deployments and automated reporting and monitoring tools. Exhibit knowledge of regulations regarding data security and retention, anti-virus tools and protection models, network security and access protocols, and firewall configurations. Show ability to manage and prioritize projects across multiple functions and divisions and manage incident response while providing clear communications to affected parties.
Qualifications and Skills
5+ years of experience in an infrastructure/end-user support role.
• Microsoft Active Directory (User, group and computer management)
• Microsoft Windows Desktop and Server Operating Systems - Windows registry, Group Policy, File/folder security concepts (NTFS/share permissions, etc.)
• Microsoft Office/SharePoint/Teams
• Microsoft SCCM
• Infoblox DNS/DHCP Management
• Performance Monitoring
• Enterprise Backups (Commvault/Rubrik)
• Enterprise storage (Pure)
• Citrix Workspace/XenDesktop Virtual Desktop
• VMware vSphere
• Core networking concepts (DNS, DHCP, etc.)
• Excellent customer service skills.
• Excellent verbal and written communication skills.
• High sense of urgency to support a trading floor.
• Able to follow directions, priorities, and guidance from management.
• Ability to multi-task and work on several projects at the same time.
• Strong ability to deliver on time.
• Ability to document process, requirements, and create test plans.
• Strong ability to translate business requirements into technical solutions.
• Strong team player.
Staff Engineer (CMT)
Pensacola, FL jobs
NOVA Engineering is hiring Staff Engineers to work on Construction Materials Testing & Special Inspection projects throughout the Pensacola, FL region. This opportunity will allow you to work on some of the most prestigious projects in the southeastern U.S. with much room for career advancement.
Primary responsibilities will include:
Staff/project engineering duties including data reduction, analysis & fieldwork for commercial, industrial, retail, government, office and residential projects (both Geotechnical Engineering and Construction Materials Testing/Inspection)
Assisting with project management & reporting
Field inspection, sampling & testing of soils/foundations, concrete, masonry, reinforcing steel, etc.
Report preparation
Client consultation and maintenance
Qualifications:
B.S. degree in Civil Engineering is required
E.I.T. is required
Recent Graduates Encouraged to Apply
0 - 2 years of relevant experience
Strong communication skills
Position entails approximately 80% fieldwork and 20% office with occasional travel
Check out our Perks:
In addition to our welcoming company culture and competitive compensation packages, our employees enjoy the below benefits:
Use of take-home Company Vehicle and gas card for daily travel to work sites
Comprehensive group medical insurance, including health, dental and vision
Opportunity for professional growth and advancement
Certification reimbursement
Paid time off
Company-observed paid holidays
Company paid life insurance for employee, spouse and children
Company paid short term disability coverage
Other supplemental benefit offerings including long-term disability, critical illness, accident and identity theft protection
401K retirement with company matching of 50% on the first 6% of employee contributions
Wellness program with incentives
Employee Assistance Program
NOVA is an Equal Opportunity Employer. All qualified candidates are encouraged to apply. NOVA does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, disability, national origin, ancestry, marital status, veteran status or any other characteristic protected by law.
Software Engineer
Raleigh, NC jobs
About the Role
We're looking for an experienced Senior Software Engineer to join our dynamic team focused on building scalable, high-performance SaaS applications. If you enjoy tackling complex challenges, mentoring others, and contributing to architectural decisions, this position is for you.
What You'll Do
Design, develop, and maintain robust applications using .NET Core, ASP.NET, and C#.
Collaborate with cross-functional teams in an Agile environment to deliver impactful features.
Support and refactor legacy applications, ensuring stability during modernization efforts.
Participate in architectural planning and advocate for best practices in coding, testing, and performance optimization.
Debug and enhance existing codebases while driving improvements in maintainability.
Mentor junior engineers through code reviews and technical guidance.
Stay current with emerging technologies and bring innovative ideas to the team.
Contribute to DevOps workflows, including CI/CD pipeline development and deployment strategies.
Qualifications
6+ years of professional software development experience, with strong expertise in .NET technologies.
Proficiency in .NET Core, ASP.NET MVC, Web API, and C#, ideally within cloud environments (AWS preferred; Azure/GCP acceptable).
Familiarity with front-end frameworks such as React and TypeScript.
Solid understanding of software design principles (SOLID) and modern architectural patterns.
Ability to work independently, learn new technologies quickly, and adapt to evolving requirements.
Bachelor's degree in computer science or related field, or equivalent experience.
Experience in SaaS platforms, multi-product ecosystems, and Agile methodologies is a plus.