Senior Data Scientist
Data scientist job in Birmingham, AL
We are seeking a Senior Data Scientist to lead data-driven innovation and deliver actionable insights that shape strategic decisions. In this role, you will collaborate with product, design, and engineering teams to develop advanced analytical models, optimize business processes, and build scalable data solutions. The work focuses on automating the integration of disparate, unstructured data into a structured system, a process that was previously manual, time-consuming, and error-prone. You will work with cutting-edge technologies across Python, AWS, Azure, and IBM Cloud (preferred) to design and deploy predictive models and machine learning algorithms in production environments.
Key Responsibilities:
Act as a senior data strategist, identifying and integrating new datasets into product capabilities.
Focus on automation use cases in which disparate data is restructured into a unified system, improving data-extraction accuracy, operational efficiency, and data quality.
Partner with engineering teams to build and enhance data products and pipelines.
Execute analytical experiments and develop predictive models to solve complex business challenges.
Collect, clean, and prepare structured and unstructured datasets for analysis.
Build and optimize algorithms for large-scale data mining, pattern recognition, and predictive modeling.
Analyze data for trends and actionable insights to inform business decisions.
Deploy analytical models to production in collaboration with software developers and ML engineers.
Stay current with emerging technologies, cloud platforms, and industry best practices.
Required Skills & Education:
7+ years of experience in data science or advanced analytics.
Strong expertise in Python and proficiency in SQL.
Hands-on experience with AWS and Azure; familiarity with IBM Cloud is a bonus.
Advanced knowledge of data mining, statistical analysis, predictive modeling, and machine learning techniques.
Ability to work effectively in a dynamic, research-oriented environment with multiple projects.
Bachelor's degree in Statistics, Applied Mathematics, Computer Science, or related field (or equivalent experience).
Excellent communication skills to present insights to technical and non-technical stakeholders.
Preferred Qualifications:
2+ years of project management experience.
Relevant professional certifications (AWS, Azure, Data Science, Machine Learning).
About Seneca Resources:
At Seneca Resources, we are more than just a staffing and consulting firm; we are a trusted career partner. With offices across the U.S. and clients ranging from Fortune 500 companies to government organizations, we provide opportunities that help professionals grow their careers while making an impact.
When you work with Seneca, you're choosing a company that invests in your success, celebrates your achievements, and connects you to meaningful work with leading organizations nationwide. We take the time to understand your goals and match you with roles that align with your skills and career path. Our consultants and contractors enjoy competitive pay, comprehensive health, dental, and vision coverage, 401(k) retirement plans, and the support of a dedicated team who will advocate for you every step of the way.
Senior Data Scientist
Data scientist job in Birmingham, AL
We're seeking a Contract-to-Hire Senior Data Scientist to lead and collaborate with a multidisciplinary team in designing and developing innovative analytical products and solutions using Machine Learning, NLP, and Deep Learning. This role is ideal for someone who thrives in ambiguity, enjoys solving complex problems, and can translate business needs into measurable outcomes.
What You'll Do
• Partner with business leaders to understand needs and define measurable goals
• Gather requirements, build project plans, manage deadlines, and communicate updates
• Analyze large structured and unstructured datasets
• Build, evaluate, implement, and maintain predictive models
• Present results to both technical and non-technical stakeholders
• Deploy models and monitor ongoing performance and data accuracy
• Contribute ideas, stay current with industry trends, and support team development
Lead-Level Opportunities Include:
• Driving data science strategy and overseeing project delivery
• Providing technical mentorship and leadership to the team
• Promoting innovation and exploring emerging tech, tools, and methodologies
What We're Looking For
• Bachelor's degree in Applied Mathematics, Statistics, Computer Science, Data Science, or related field
• 3-6 years of relevant experience (advanced degrees may reduce required experience)
• Strong skills in machine learning, statistical modeling, and data analysis
• Proficiency in Python or R
• Experience with large datasets, preprocessing, and feature engineering
• Prior management experience
• Experience with transfer learning
• Experience building and deploying deep learning solutions
• Strong communication skills and ability to present complex concepts clearly
• Experience in life insurance or a related domain is a plus
• Ability to independently manage projects end-to-end
Qualifications
• Master's or PhD
• Industry experience in similar roles
• Publications or patents in data science or ML
• Experience collaborating across technical and business teams
• Familiarity with software engineering best practices and version control
• Relevant certifications (AWS ML Specialty, Google Data Engineer, etc.)
Rooted in Birmingham. Focused on You.
We're a local recruiting firm based right here in Birmingham. We partner with top companies across the city, from large corporations to fast-growing startups, and we'd love to meet you for coffee to talk about your career goals. Whether you're actively searching or just exploring, we're here to guide you through the entire process, from resume tips to interview coaching.
At our clients' request, only individuals with required experience will be considered.
Please note - if you have recently submitted your resume to a PangeaTwo posting, your qualifications will be considered for other open opportunities.
Your resume will never be submitted to a client without your prior knowledge and consent to do so.
Electronic Data Interchange Consultant
Data scientist job in Birmingham, AL
DETAILS: EDI CONSULTANT/TRAINER
Title: EDI Consultant
Length: 3-6 months for the first project, with possible extensions for additional projects
Compensation: Hourly DOE
Location: Meadowbrook, AL (Birmingham); can be remote, but requires an onsite presence for a couple of weeks at the start and occasional visits as needed thereafter.
OVERVIEW: EDI CONSULTANT/TRAINER
This individual will plan, develop, and implement the EDI operations and strategy roadmap for the organization, and train and mentor a small team.
RESPONSIBILITIES: EDI CONSULTANT/TRAINER
Manage Mapping and Administration for TrustedLink/OpenText/BizManager for iSeries/AS400
Mentor a small team of resources to assist in EDI operations.
Oversees the design, development, testing, deployment, and maintenance of the EDI systems, applications, and integrations; must be strong with TrustedLink and BizManager for iSeries.
Develop and Document Specifications
Monitors and evaluates the EDI system's performance, availability, security, and compliance, and initiates corrective actions as needed.
Ensures that the EDI systems adhere to the industry standards, best practices, and regulatory requirements.
Resolves complex EDI issues and provides technical support and guidance to the users.
Establishes and maintains effective relationships with the internal and external stakeholders, such as business units, IT departments, vendors, and trading partners.
MINIMUM REQUIREMENTS: EDI CONSULTANT/TRAINER
Experience with AS400 / iSeries and RPG development and data files.
Strong experience with OpenText, TrustedLink, and BizManager for iSeries
2+ years leadership experience training and leading a small team
10+ years of experience in EDI systems development, implementation, and management.
Extensive knowledge and expertise in EDI standards, formats, protocols, and technologies, such as ANSI X12, EDIFACT, XML, AS2, FTP, and VAN communications.
Data Architect
Data scientist job in Orlando, FL
Data Architect
Duration: 6 Months
Responsible for enterprise-wide data design, balancing optimization of data access with batch loading and resource utilization factors. Knowledgeable in most aspects of designing and constructing data architectures, operational data stores, and data marts. Focuses on enterprise-wide data modeling and database design. Defines data architecture standards, policies, and procedures for the organization, structure, attributes, and nomenclature of data elements, and applies accepted data content standards to technology projects. Responsible for business analysis; data acquisition and access analysis and design; Database Management Systems optimization; and recovery strategy and load strategy design and implementation.
Essential Position Functions:
Evaluate and recommend data management processes.
Design, prepare and optimize data pipelines and workflows.
Lead implementations of secure, scalable, and reliable Azure solutions.
Observe and recommend how to monitor and optimize Azure for performance and cost-efficiency.
Endorse and foster security best practices, access controls, and compliance standards for all data lake resources.
Perform knowledge transfer about troubleshooting and documenting Azure architectures and solutions.
Skills required:
Deep understanding of Azure Synapse Analytics, Azure Data Factory, and related Azure data tools
Expertise in implementing Data Vault 2.0 methodologies using WhereScape automation software.
Proficient in designing and optimizing fact and dimension table models.
Demonstrated ability to design, develop, and maintain data pipelines and workflows.
Strong skills in formulating, reviewing, and optimizing SQL code.
Expertise in data collection, storage, accessibility, and quality improvement processes.
Proven track record of delivering consumable data using information marts.
Excellent communication skills to effectively liaise with technical and non-technical team members.
Ability to document designs, procedures, and troubleshooting methods clearly.
Proficiency in Python or PowerShell preferred.
Bachelor's or Master's degree in Computer Science, Information Systems, or another related field, or equivalent work experience.
A minimum of 7 years of experience with large and complex database management systems.
Data Engineer (Mid & Senior)
Data scientist job in Huntsville, AL
Veteran-Owned Firm Seeking Data Engineers for an Onsite Assignment in Huntsville, AL
My name is Stephen Hrutka. I lead a Veteran-Owned management consulting firm in Washington, DC. We specialize in Technical and Cleared Recruiting for the Department of Defense (DoD), the Intelligence Community (IC), and other advanced defense agencies.
At HRUCKUS, we support fellow Veteran-Owned businesses by helping them recruit for positions across organizations such as the VA, SBA, HHS, DARPA, and other leading-edge R&D-focused defense agencies.
We seek to fill Data Engineer roles supporting the FBI in Huntsville, AL.
The ideal candidate will possess an active Top Secret security clearance and 5+ years (mid-level) or 8+ years (senior-level) of experience in data engineering or database development. They should have strong hands-on experience with ETL tools (e.g., Informatica, Talend, Pentaho, AWS Glue, or custom Java ETL frameworks) and be proficient in SQL and at least one major RDBMS (Oracle or PostgreSQL).
If you're interested, I'll gladly provide more details about the role and discuss your qualifications further.
Thanks,
Stephen M Hrutka
Principal Consultant
HRUCKUS LLC
Executive Summary: HRUCKUS is seeking Mid-Level and Senior-Level Data Engineers with an active Top Secret security clearance for roles supporting the FBI in Huntsville, AL.
Job Description: We are seeking Data Engineers (Senior and Mid-Level) to support secure, mission-critical data environments within a classified cloud infrastructure. These roles are fully onsite in Huntsville, AL, and require an active Top Secret clearance.
The ideal candidates will have strong experience with ETL development, data migration, Java-based data pipelines, and relational/NoSQL databases (Oracle, PostgreSQL, MongoDB), along with exposure to AWS cloud services and Agile/Scrum methodologies.
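For a concrete flavor of the Python automation this kind of role involves, here is a minimal, hypothetical sketch that starts an AWS Glue ETL job with boto3 and polls until it reaches a terminal state. The job name and region are invented for illustration and are not the program's actual tooling.

```python
# Minimal sketch: kick off an AWS Glue ETL job and wait for it to finish.
# The job name and region are hypothetical placeholders.
import time
import boto3

glue = boto3.client("glue", region_name="us-east-1")  # hypothetical region

# Start the Glue job (the job must already be defined in the target account).
run = glue.start_job_run(JobName="nightly-claims-etl")  # hypothetical job name
run_id = run["JobRunId"]

# Poll the run state until Glue reports a terminal status.
while True:
    status = glue.get_job_run(JobName="nightly-claims-etl", RunId=run_id)
    state = status["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print(f"Job finished with state: {state}")
        break
    time.sleep(30)  # wait before checking again
```

In practice a script like this would run from a scheduler or CI pipeline and feed its result into alerting rather than printing to stdout.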
Responsibilities:
Design, develop, and maintain ETL workflows to extract, transform, and load large-scale structured and unstructured datasets.
Develop data migration solutions between legacy and modern systems using SQL, Java, and cloud-native tools.
Implement data integration frameworks leveraging AWS services such as Glue, Lambda, S3, RDS, Redshift, and Kinesis.
Develop automation scripts using Python, Shell, or Bash for deployment, data validation, and maintenance tasks.
Maintain and enhance data pipelines for real-time and batch data processing.
Support data quality, metadata management, and governance activities.
Participate in Agile/Scrum sprints, contributing to design, code reviews, testing, and documentation.
Troubleshoot and resolve data-related issues across on-premises and AWS environments.
Qualifications:
Active TOP SECRET clearance
Bachelor's degree in Computer Science, IT, or related field
Mid-Level: 5+ years of professional experience in data engineering or database development.
Senior-Level: 8+ years of professional experience in data engineering or database development.
Strong hands-on experience with ETL tools (e.g., Informatica, Talend, Pentaho, AWS Glue, or custom Java ETL frameworks).
Proficiency in SQL and at least one major RDBMS (Oracle or PostgreSQL).
Experience with data migration projects and data quality validation.
Proficient in Java or Python for data processing and automation.
Experience working with cloud technologies, preferably AWS (RDS, S3, Lambda, Redshift, Glue).
Working knowledge of Linux/Unix environments and shell scripting.
Experience in an Agile/Scrum development environment.
Excellent problem-solving, analytical, and communication skills.
Details:
Job Title: Mid-Level and Senior-level Data Engineer
Location: Redstone Arsenal, Huntsville, AL 35898
Security Clearance: Top-Secret Clearance
Salary: Up to $150,000 per year (based on experience)
Job Type: Full-time, Onsite
Benefits:
Paid Time Off (PTO): 3 weeks of PTO (including sick leave). Unused PTO is paid out at the end of the year.
Holidays: Two floating holidays and eight public holidays per year.
Health & Dental Insurance: The company covers 50% of employee health and dental insurance (dependents may be added at an extra cost). Coverage becomes effective after 30 days.
Life Insurance: Standard Short-Term Disability (STD), Long-Term Disability (LTD), and life insurance at no cost to full-time employees.
401(k) Program: Eligible after 90 days with a 4% company match and immediate vesting.
Profit Sharing: Employees can participate in the company's profit-sharing program without requiring personal contributions.
Commuting and Parking: No reimbursement for commuting or parking expenses.
Financial Data Analyst
Data scientist job in Palm Beach, FL
Heirloom Fair Legal is a specialist legal financer, providing financing to law firms, claimants, and their service providers to promote access to justice for consumer and small-business legal claims in the UK. Based in London and with offices in Manchester and Warrington in England, we are expanding to the US with a new Palm Beach, FL office.
Role Description
This is a full-time hybrid role for a Financial Data Analyst, located in Palm Beach, FL, with opportunities for remote work. The Financial Data Analyst is responsible for our portfolio and reporting system, which tracks our financings to or via approximately 20 law firms or service providers, covering approximately 350,000 pieces of underlying collateral. We are developing a new system based on SQL and Python with an as-yet unselected ETL/Business Intelligence layer. This role will lead the design and buildout of the new system, migrate data over, and own data cleansing and ingestion. It will also be responsible for generating regular reporting for HFL's Investment Committee, external investors, and other stakeholders.
Qualifications
Strong Analytical Skills and proficiency in Data Analytics
Experience with SQL, Python and ETL or Business Intelligence tools
Proficiency in generating reports using data visualization and similar tools
Ability to work independently, balancing multiple priorities and working to deadlines
Strong problem-solving skills and critical thinking capabilities
Bachelor's degree in Finance, Economics, Data Analytics, or a related field
Prior experience in the legal finance or investment industry is a plus
Data Architect
Data scientist job in Orlando, FL
(Orlando, FL)
Business Challenge
The company is in the midst of an AI transformation, creating exciting opportunities for growth. At the same time, they are leading a Salesforce modernization and integrating the systems and data of their recent acquisition.
To support these initiatives, they are bringing in a Senior Data Architect/Engineer to establish enterprise standards for application and data architecture, partnering closely with the Solutions Architect and Tech Leads.
Role Overview
The Senior Data Architect/Engineer leads the design, development, and evolution of enterprise data architecture, while contributing directly to the delivery of robust, scalable solutions. This position blends strategy and hands-on engineering, requiring deep expertise in modern data platforms, pipeline development, and cloud-native architecture.
You will:
Define architectural standards and best practices.
Evaluate and implement new tools.
Guide enterprise data initiatives.
Partner with data product teams, engineers, and business stakeholders to build platforms supporting analytics, reporting, and AI/ML workloads.
Day-to-Day Responsibilities
Lead the design and documentation of scalable data frameworks: data lakes, warehouses, streaming architectures, and Azure-native data platforms.
Build and optimize secure, high-performing ETL/ELT pipelines, data APIs, and data models (a small data-API sketch follows this list).
Develop solutions that support analytics, advanced reporting, and AI/ML use cases.
Recommend and standardize modern data tools, frameworks, and architectural practices.
Mentor and guide team members, collaborating across business, IT, and architecture groups.
Partner with governance teams to ensure data quality, lineage, security, and stewardship.
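As a small illustration of the "data APIs" responsibility above, the sketch below exposes a single read-only endpoint over a table. FastAPI and the SQLite source are illustrative choices only, and all table and field names are hypothetical; the client's actual Azure-based stack is not specified here.

```python
# Minimal data-API sketch: serve rows from a table over HTTP.
# FastAPI and sqlite3 are illustrative; table and field names are hypothetical.
import sqlite3
from fastapi import FastAPI

app = FastAPI()
DB_PATH = "analytics.db"  # hypothetical local database file

@app.get("/orders/{order_id}")
def get_order(order_id: int):
    """Return a single order row as JSON, or a not-found marker."""
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    row = conn.execute(
        "SELECT order_id, customer_id, amount FROM orders WHERE order_id = ?",
        (order_id,),
    ).fetchone()
    conn.close()
    if row is None:
        return {"error": "not found"}
    return dict(row)

# Run locally with: uvicorn data_api:app --reload  (assuming this file is data_api.py)
```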
Desired Skills & Experience
10+ years of progressive experience in Data Engineering and Architecture.
Strong leadership experience, including mentoring small distributed teams (currently 4 people: 2 onshore, 2 offshore; team growing to 6).
Deep knowledge of Azure ecosystem (Data Lake, Synapse, SQL DB, Data Factory, Databricks).
Proven expertise with ETL pipelines (including 3rd-party/vendor integrations).
Strong SQL and data modeling skills; familiarity with star/snowflake schemas and other approaches.
Hands-on experience creating Data APIs.
Solid understanding of metadata management, governance, security, and data lineage.
Programming experience with SQL, Python, Spark.
Familiarity with containerized compute/orchestration frameworks (Docker, Kubernetes) is a plus.
Experience with Salesforce data models, MDM tools, and streaming platforms (Kafka, Event Hub) is preferred.
Excellent problem-solving, communication, and leadership skills.
Education:
Bachelor's degree in Computer Science, Information Systems, or related field (Master's preferred).
Azure certifications in Data Engineering or Solution Architecture strongly preferred.
Essential Duties & Time Allocation
Data Architecture Leadership - Define enterprise-wide strategies and frameworks (35%)
Engineering & Delivery - Build and optimize ETL/ELT pipelines, APIs, and models (30%)
Tooling & Standards - Evaluate new tools and support adoption of modern practices (15%)
Mentorship & Collaboration - Mentor engineers and align stakeholders (10%)
Governance & Quality - Embed stewardship, lineage, and security into architecture (10%)
Sr Electronic Data Interchange Coordinator
Data scientist job in Tampa, FL
On-Site Locations: Tampa, FL and Arcadia, WI
(GC/USC Only)
Senior EDI Coordinator
Senior EDI Coordinators create new and update existing EDI maps to support the movement of thousands of transactions each day, set up and maintain EDI trading partners, set up and maintain EDI communication configurations, and provide support for a large assortment of EDI transactions with a variety of trading partners.
Primary Job Functions:
Monitor inbound and outbound transaction processing to ensure successful delivery. Take corrective action on those transactions that are not successful.
Develop and modify EDI translation maps according to Business Requirements Documents and EDI Specifications.
Perform unit testing and coordinate integrated testing with internal and external parties.
Perform map reviews to ensure new maps and map changes comply with requirements and standards.
Prepare, maintain, and review documentation. This includes Mapping Documents, Standard Operating Procedures, and System Documentation.
Perform Trading Partner setup, configuration, and administrative activities.
Analyze and troubleshoot connectivity, mapping, and data issues.
Provide support to our business partners and external parties.
Participate in an after-hours on-call rotation.
Setup and maintain EDI communication channels.
Provide coaching and mentoring to EDI Coordinators.
Suggest EDI best practices and opportunities for improvement.
Maintain and update AS2 Certificates.
Deploy map changes to production.
Perform EDI system maintenance and upgrades.
Job Qualifications:
Education:
Bachelor's Degree in Information Systems, Computer Science, or other related fields; or equivalent combination of education and experience, Required
Experience:
5+ years of practical EDI mapping experience, with emphasis on ANSI X12, Required
Experience working with XML and JSON transactions, Preferred
Experience working with AS2, VAN, and sFTP communications, Preferred
Experience working with AS2 Certificates, Preferred
Experience with Azure DevOps Agile/Scrum platform, Preferred
Experience in large, complex enterprise environments, Preferred
Knowledge, Skills and Abilities:
Advanced analytical and problem-solving skills
Strong attention to detail
Excellent written and verbal communication skills
Excellent client facing and interpersonal skills
Effective time management and organizational skills
Work independently as well as in a team environment
Handle multiple projects simultaneously within established time constraints
Perform under strong demands in a fast-paced environment
Display empathy, understanding and patience with employees and external customers
Respond professionally in situations with difficult employee/vendor/customer issues or inquiries
Working knowledge of Continuous Improvement methodologies
Strong working knowledge of Microsoft Office Suite
Senior Data Engineer
Data scientist job in Saint Petersburg, FL
Sr. Data Engineer
CLIENT: Fortune 150 Company; Financial Services
SUMMARY DESCRIPTION:
The Data Engineer will serve in a strategic role designing and managing the infrastructure that supports data storage, transformation, processing, and retrieval, enabling efficient data analysis and decision-making within the organization. This position is critical as part of the Database and Analytics team responsible for the design, development, and implementation of complex enterprise-level data integration and consumption solutions. It requires a highly technical, self-motivated senior engineer who will work with analysts, architects, and systems engineers to develop solutions based on functional and technical specifications that meet quality and performance requirements.
Must have experience with Microsoft Fabric.
PRIMARY DUTIES AND RESPONSIBILITIES:
Utilize experience in ETL tools, with at least 5 years dedicated to Azure Data Factory (ADF), to design, code, implement, and manage multiple parallel data pipelines. Experience with Microsoft Fabric, Pipelines, Mirroring, and Data Flows Gen 2 usage is required.
Apply a deep understanding of data warehousing concepts, including data modeling techniques such as star and snowflake schemas, SCD Type 2, Change Data Feeds, and Change Data Capture. Demonstrate hands-on experience with Data Lake Gen 2, Delta Lake, Delta Parquet files, JSON files, and big data storage layers, and optimize and maintain big data storage using partitioning, V-Order, OPTIMIZE, VACUUM, and other techniques (a brief maintenance sketch follows this list of duties).
Design and optimize medallion data models, warehouses, architectures, schemas, indexing, and partitioning strategies.
Collaborate with Business Insights and Analytics teams to understand data requirements and optimize storage for analytical queries.
Modernize databases and data warehouses and prepare them for analysis, managing for optimal performance.
Design, build, manage, and optimize enterprise data pipelines ensuring efficient data flow, data integrity, and data quality throughout the process.
Automate efficient data acquisition, transformation, and integration from a variety of data sources including databases, APIs, message queues, data streams, etc.
Competently perform advanced data tasks with minimal supervision, including architecting advanced data solutions, leading and coaching others, and effectively partnering with stakeholders.
Interface with other technical and non-technical departments and outside vendors on assigned projects.
Under the direction of IT Management, establish standards, policies, and procedures pertaining to data governance, database/data warehouse management, metadata management, security, optimization, and utilization.
Ensure data security and privacy by implementing access controls, encryption, and anonymization techniques as per data governance and compliance policies.
Expertise in managing schema drift within ETL processes, ensuring robust and adaptable data integration solutions.
Document data pipelines, processes, and architectural designs for future reference and knowledge sharing.
Stay informed of latest trends and technologies in the data engineering field, and evaluate and adopt new tools, frameworks, and platforms (like Microsoft Fabric) to enhance data processing and storage capabilities.
When necessary, implement and document schema modifications made to legacy production environment.
Perform any other function required by IT Management for the successful operation of all IT and data services provided to our clients.
Available nights and weekends as needed for system changes and rollouts.
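As referenced in the duties above, here is a minimal sketch of routine Delta Lake table maintenance. It assumes a Spark runtime with Delta Lake available (for example Fabric or Databricks); the source path, schema, table, and partition column are hypothetical, and Fabric-specific write options such as V-Order are omitted.

```python
# Minimal sketch: partitioned Delta write, file compaction, and old-file cleanup.
# Paths, schema, table, and partition column are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-maintenance-sketch").getOrCreate()

# Append a partitioned Delta table; assumes the source data has a load_date column.
df = spark.read.parquet("/landing/claims/")          # hypothetical source path
(df.write.format("delta")
   .mode("append")
   .partitionBy("load_date")
   .saveAsTable("silver.claims"))                    # hypothetical schema.table

# Compact small files into larger ones to keep reads fast.
spark.sql("OPTIMIZE silver.claims")

# Remove files no longer referenced by the table, keeping 7 days (168 hours) of history.
spark.sql("VACUUM silver.claims RETAIN 168 HOURS")
```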
EDUCATION AND EXPERIENCE REQUIREMENTS:
Bachelor's or Master's degree in computer science, information systems, applied mathematics, or closely related field.
Minimum of ten (10) years full time employment experience as a data engineer, data architect, or equivalent required.
Must have experience with Microsoft Fabric.
SKILLS:
Experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures, and integrated datasets using traditional and modern data integration technologies (such as ETL, ELT, MPP, data replication, change data capture, message-oriented data movement, API design, stream data integration, and data virtualization)
Experience working with cloud data engineering stacks (specifically Azure and Microsoft Fabric), Data Lake, Synapse, Azure Data Factory, Databricks, Informatica, Data Explorer, etc.
Strong, in-depth understanding of database architecture, storage, and administration utilizing Azure stack.
Deep understanding of Data architectural approaches, Data Engineering Solutions, Software Engineering principles and best practices.
Working knowledge and experience with modern BI and ETL tools (Power BI, Power Automate, ADF, SSIS, etc.)
Experience utilizing data storage solutions including Azure Blob storage, ADLS Gen 2.
Solid understanding of relational and dimensional database principles and best practices in a client/server, thin-client, and cloud computing environment.
Advanced working knowledge of T-SQL and SQL Server, including transactions, error handling, security, and maintenance, with experience writing complex stored procedures, views, and user-defined functions, as well as complex functions, dynamic SQL, partitions, CDC, CDF, etc.
Experience with .NET scripting and an understanding of API integration in a service-oriented architecture.
Knowledge of reporting tools, query language, semantic models with specific experience with Power BI.
Understanding of and experience with agile methodology.
PowerShell scripting experience desired.
Experience with Service Bus, Azure Functions, Event Grids, Event Hubs, Kafka would be beneficial.
Experience working in Agile methodology.
Working Conditions:
Available to work evenings and/or weekends (as required).
Workdays and hours are Monday through Friday 8:30 am to 5:30 pm ET.
Production Data Coordinator
Data scientist job in Sarasota, FL
eComSystems is seeking a hyper-detail-oriented Production Coordinator responsible for serving as the team's dedicated data and system architect. This specialized role ensures the seamless integration of all client data into the AdStudio platform. The Coordinator takes the initial brief from the Project Manager and transforms it into a 100% clean, ready-to-design file shell, enabling the Production Artists to focus purely on creative execution.
Key Responsibilities
Data Mastering & Preparation: Own the end-to-end management, manipulation, and upload of all data files (primarily using Microsoft Excel) required for weekly circular production.
System Architecture: Utilize coding tools (e.g., VBA) and advanced Excel functions to cleanse, format, and validate large datasets to ensure compliance with AdStudio's strict import requirements.
AdStudio Shell Building: Execute data merges, image uploads, and template application to create the production-ready ad files ("shells") that are handed off to the Production Artists.
Process Integrity: Perform strict internal quality checks on all data imports and shell builds to eliminate errors before they reach the design phase (critical for the high volume of weekly ads).
Asset Management: Coordinate with the Project Manager to ensure all assets are sourced, named correctly, and available in the shared directories for seamless integration.
Workflow Handoff: Formally update the project tracking system to signal the clean handoff of files to the Production Artist team.
Qualifications
3+ years of experience in a high-volume production environment, focusing specifically on data manipulation, systems integration, or production coordination.
Expert proficiency in Microsoft Excel is mandatory, including mastery of complex formulas, pivot tables, and data validation techniques. Familiarity with VBA or other scripting languages is highly desirable.
Experience with proprietary software, content management systems (CMS), or data-driven design platforms (AdStudio or similar) is strongly preferred.
Demonstrated high attention to detail, precision, and a proactive approach to troubleshooting data errors.
Exceptional organizational skills with a strong ability to adhere to strict procedural guidelines.
About eCom
eComSystems (“eCom”) provides proprietary ad tech solutions for well-known national brands, clients and retailers, enabling them to do business better, improve efficiencies, and impact the bottom line. For more than 25 years, eCom has been the marketing technology platform of major distributors, retailers, and wholesalers across the US. With a focus on grocery, hardware, building materials, pharmacy, food distribution, and sporting goods channels, eCom's patented omnichannel platform creates, distributes, and manages national promotions through digital, web, social, and print media.
Job Type: Full-time
Benefits:
401(k)
Dental insurance
Health insurance
Life insurance
Paid time off
Vision insurance
Ability to Commute:
Sarasota, FL 34240 (Required)
Job Type: Full-time
Pay: $50,000.00 - $60,000.00 per year
Senior Python Data Engineer (Banking)
Data scientist job in Miami, FL
ITTConnect is seeking a Sr. Data Engineer with experience in Banking/Financial Services for a direct-hire, full-time position with a client that is a large financial institution. The position is hybrid.
Requirements
10+ years of experience with Software Development
3+ years of experience with Data Engineering
Hands-on Python coding experience, with knowledge in DataOps and on-premise environments
Strong understanding of Python and its applicability within data tools, including pandas and related libraries
Airflow: DAG creation, workflow maintenance, integration with dbt-core (a minimal DAG sketch follows this list)
DBT: Development and maintenance of models and macros in dbt-core (not dbt Cloud). Experience migrating SQL code into dbt
Git: Layered deployment structure aligned with the Infrastructure team
SQL Server: Advanced knowledge of SQL Server, including tuning, performance evaluation, optimization
Medallion Architecture: Understanding of Medallion architecture operations. Implementation skills are not required, but operational familiarity is expected
Highly desirable previous experience working for Financial Services / Banks
Highly desirable fluency in Portuguese or Spanish
Bachelor's degree in Information Technology or related field
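As referenced above, here is a minimal sketch of an Airflow DAG (Airflow 2.4+ style) that runs dbt-core after an ingestion step. The project path, schedule, and ingestion callable are hypothetical; a production DAG would add retries, alerting, and environment/profile handling.

```python
# Minimal sketch of an Airflow DAG that runs dbt-core models after an ingestion step.
# Paths, schedule, and the ingestion callable are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def ingest_source_files():
    """Placeholder for a file-loading step (e.g. landing files into the warehouse)."""
    print("ingesting source files...")

with DAG(
    dag_id="daily_dbt_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_source_files", python_callable=ingest_source_files)

    # dbt-core is invoked via its CLI; the project directory is hypothetical.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics_project && dbt run",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics_project && dbt test",
    )

    ingest >> dbt_run >> dbt_test
```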
Lead Data Engineer
Data scientist job in Tampa, FL
A leading Investment Management Firm is looking to bring on a Lead Data Engineer to join its team in Tampa, Denver, Memphis, or Southfield. This is an excellent chance to work alongside industry leaders while being hands-on and helping to lead the team.
Key Responsibilities
Project Oversight: Direct end-to-end software development activities, from initial requirements through deployment, ensuring projects meet deadlines and quality standards.
Database Engineering: Architect and refine SQL queries, stored procedures, and schema designs to maximize efficiency and scalability within Oracle environments.
Performance Tuning: Evaluate system performance and apply strategies to enhance data storage and retrieval processes.
Data Processing: Utilize tools like Pandas and Spark for data wrangling, transformation, and analysis.
Python Solutions: Develop and maintain Python-based applications and automation workflows.
Pipeline Automation: Implement and manage continuous integration and delivery pipelines using Jenkins and similar technologies to optimize build, test, and release cycles.
Team Development: Guide and support junior engineers, promoting collaboration and technical growth.
Technical Documentation: Create and maintain comprehensive documentation for all development initiatives.
Core Skills
Experience: Over a decade in software engineering, with deep expertise in Python and Oracle database systems.
Technical Knowledge: Strong command of SQL, Oracle, Python, Spark, Jenkins, Kubernetes, Pandas, and modern CI/CD practices.
Optimization Expertise: Skilled in database tuning and applying best practices for performance.
Leadership Ability: Proven track record in managing teams and delivering complex projects.
Analytical Strength: Exceptional problem-solving capabilities with a data-centric mindset.
Communication: Clear and effective written and verbal communication skills.
Education: Bachelor's degree in Computer Science, Engineering, or equivalent professional experience.
Preferred Qualifications
Certifications: Professional credentials in Oracle, Python, Kubernetes, or CI/CD technologies.
Agile Background: Hands-on experience with Agile or Scrum frameworks.
Cloud Platforms: Familiarity with AWS, Azure, or Google Cloud services.
Data Modeling
Data scientist job in Melbourne, FL
Must Have Technical/Functional Skills
• 5+ years of experience in data modeling, data architecture, or a similar role
• Proficiency in SQL and experience with relational databases such as Oracle, SQL Server, or PostgreSQL
• Experience with data modeling tools such as Erwin, IBM Infosphere Data Architect, or similar
• Ability to communicate complex concepts clearly to diverse audiences
Roles & Responsibilities
• Design and develop conceptual, logical, and physical data models that support both operational and analytical needs
• Collaborate with business stakeholders to gather requirements and translate them into scalable data models
• Perform data profiling and analysis to understand data quality issues and identify opportunities for improvement
• Implement best practices for data modeling, including normalization, denormalization, and indexing strategies
• Lead data architecture discussions and present data modeling solutions to technical and non-technical audiences
• Mentor and guide junior data modelers and data architects within the team
• Continuously evaluate data modeling tools and techniques to enhance team efficiency and productivity
Base Salary Range: $100,000 - $150,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Data Engineer
Data scientist job in Palm Beach Gardens, FL
Flybridge Staffing is currently searching for a Data Engineer for a client located in the Palm Beach Gardens area. This is a direct-hire position with a hybrid schedule of 2 days remote. This person will design systems that supply high-performance datasets for advanced analytics.
Experience:
BA degree and 5+ years of Data Engineering experience
Strong experience building ETL data pipelines for on-premises SQL Server 2017 or newer
Deep understanding of the development of data pipelines with either SSIS or Python
Broad experience with SQL Server, including Columnstore, etc.
Extensive experience using SSMS and T-SQL to create and maintain SQL Server tables, views, functions, stored procedures, and user-defined table types.
Experience with data modeling, indexes, Temporal tables, CLR, and Service Broker.
Experience in partitioning tables and indexes, and performance improvement with Query Analyzer
Experience writing C#, PowerShell, and Python.
Experience with source control integration with GitHub, BitBucket, and Azure DevOps.
Experience working in an Agile and Kanban SDLC.
Experience with cloud-based data management solutions such as Snowflake, Redshift.
Experience with Python programming is a plus. Libraries such as pandas, NumPy, csv, traceback, json, pyodbc, and math are nice to have.
Experience writing design documentation such as ERDs, Data Flow Diagrams, and Process Flow Diagrams.
Experience with open-source database engines such as ClickHouse, ArcticDB, and PostgreSQL is a plus.
Responsibilities:
Collaborate effectively with Stakeholders, Project Managers, Software Engineers, Data Analysts, QA Analysts, DBAs, and Data Engineers.
Build and maintain data pipelines based on functional and non-functional requirements.
Proactively seek out information and overcome obstacles to deliver projects efficiently.
Ensure that data pipelines incorporate best practices related to performance, scaling, extensibility, fault tolerance, instrumentation, and maintainability.
Ensure that data pipelines are kept simple and not overly engineered.
Produce and maintain design and operational documentation.
Analyze complex data problems and engineer elegant solutions.
****NO SPONSORSHIP AVAILABLE**** US Citizen, GC, EAD only please. If your background aligns with the above details and you would like to learn more, please submit your resume to jobs@flybridgestaffing.com or on our website, www.flybridgestaffing.com and one of our recruiters will be in touch with you ASAP.
Follow us on LinkedIn to keep up with all our latest job openings and referral program.
GCP Data Architect with 14+ years (Day 1 onsite)
Data scientist job in Sunrise, FL
12-14 years of overall IT experience with expertise in the data landscape: Data Warehouse, Data Lake, etc.
Hands-on experience in the Big Data and Hadoop ecosystem; strong skills in SQL, Python, or Spark
Proficient in Data Warehousing concepts and Customer Data Management (Customer 360)
Experience in GCP platform - Dataflow, Dataproc, Kubernetes containers etc.
Expertise in deep Data exploration and Data analysis
Excellent communication and interpersonal skills
Sr. Data Engineer (SQL+Python+AWS)
Data scientist job in Saint Petersburg, FL
We are looking for a Sr. Data Engineer (SQL+Python+AWS) for a 12+ month contract (potential extension or conversion to full-time), hybrid in St. Petersburg, FL 33716, with a direct financial client; W2 only, for US Citizens or Green Card Holders.
Notes from the Hiring Manager:
• Setting up Python environments and data structures to support the Data Science/ML team.
• No prior Data Science or Machine Learning experience required.
• Role involves building new data pipelines and managing file-loading connections.
• Strong SQL skills are essential.
• Contract-to-hire position.
• Hybrid role based in St. Pete, FL (33716) only.
Duties:
This role involves building and maintaining data pipelines that connect Oracle-based source systems to AWS cloud environments, providing well-structured data for analysis and machine learning in AWS SageMaker.
It includes working closely with data scientists to deliver scalable data workflows as a foundation for predictive modeling and analytics; a minimal sketch of this extract-and-load pattern follows the list of duties below.
• Develop and maintain data pipelines to extract, transform, and load data from Oracle databases and other systems into AWS environments (S3, Redshift, Glue, etc.).
• Collaborate with data scientists to ensure data is prepared, cleaned, and optimized for SageMaker-based machine learning workloads.
• Implement and manage data ingestion frameworks, including batch and streaming pipelines.
• Automate and schedule data workflows using AWS Glue, Step Functions, or Airflow.
• Develop and maintain data models, schemas, and cataloging processes for discoverability and consistency.
• Optimize data processes for performance and cost efficiency.
• Implement data quality checks, validation, and governance standards.
• Work with DevOps and security teams to comply with RJ standards.
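The sketch below illustrates the extract-and-load pattern described above: pull rows from an Oracle source with pyodbc into pandas, apply a light cleanup, and upload a Parquet file to S3 with boto3. The DSN, credentials, table, and bucket names are hypothetical placeholders, not the client's actual systems.

```python
# Minimal Oracle-to-S3 sketch; all connection details, tables, and buckets are hypothetical.
import boto3
import pandas as pd
import pyodbc

# Connect to the Oracle source (DSN/driver configuration is environment-specific).
conn = pyodbc.connect("DSN=ORACLE_SRC;UID=etl_user;PWD=example")  # hypothetical DSN/credentials

# Extract: incremental pull of recent rows from a hypothetical source table.
df = pd.read_sql(
    "SELECT account_id, trade_date, amount FROM trades WHERE trade_date >= DATE '2024-01-01'",
    conn,
)
conn.close()

# Light transform: enforce types and drop obviously bad rows before handing off to ML workflows.
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
df = df.dropna(subset=["account_id", "amount"])

# Load: write Parquet and push to S3, where Glue or SageMaker jobs can pick it up.
local_path = "trades.parquet"
df.to_parquet(local_path, index=False)          # requires pyarrow or fastparquet
boto3.client("s3").upload_file(local_path, "example-data-lake", "raw/trades/trades.parquet")
```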
Skills:
Required:
• Strong proficiency with SQL and hands-on experience working with Oracle databases.
• Experience designing and implementing ETL/ELT pipelines and data workflows.
• Hands-on experience with AWS data services, such as S3, Glue, Redshift, Lambda, and IAM.
• Proficiency in Python for data engineering (pandas, boto3, pyodbc, etc.).
• Solid understanding of data modeling, relational databases, and schema design.
• Familiarity with version control, CI/CD, and automation practices.
• Ability to collaborate with data scientists to align data structures with model and analytics requirements
Preferred:
• Experience integrating data for use in AWS SageMaker or other ML platforms.
• Exposure to MLOps or ML pipeline orchestration.
• Familiarity with data cataloging and governance tools (AWS Glue Catalog, Lake Formation).
• Knowledge of data warehouse design patterns and best practices.
• Experience with data orchestration tools (e.g., Apache Airflow, Step Functions).
• Working knowledge of Java is a plus.
Education:
B.S. in Computer Science, MIS or related degree and a minimum of five (5) years of related experience or combination of education, training and experience.
Data Architect
Data scientist job in Sunrise, FL
JD:
14+ years of overall IT experience with expertise in the data landscape: Data Warehouse, Data Lake, etc.
Hands-on experience in the Big Data and Hadoop ecosystem; strong skills in SQL, Python, or Spark
Proficient in Data Warehousing concepts and Customer Data Management (Customer 360)
Experience in GCP platform - Dataflow, Dataproc, Kubernetes containers etc.
Expertise in deep Data exploration and Data analysis
Excellent communication and interpersonal skills
Claims Data Engineer
Data scientist job in Plantation, FL
NationsBenefits is recognized as one of the fastest growing companies in America and a Healthcare Fintech provider of supplemental benefits, flex cards, and member engagement solutions. We partner with managed care organizations to provide innovative healthcare solutions that drive growth, improve outcomes, reduce costs, and bring value to their members.
Through our comprehensive suite of innovative supplemental benefits, fintech payment platforms, and member engagement solutions, we help health plans deliver high-quality benefits to their members that address the social determinants of health and improve member health outcomes and satisfaction.
Our compliance-focused infrastructure, proprietary technology systems, and premier service delivery model allow our health plan partners to deliver high-quality, value-based care to millions of members.
We offer a fulfilling work environment that attracts top talent and encourages all associates to contribute to delivering premier service to internal and external customers alike. Our goal is to transform the healthcare industry for the better! We provide career advancement opportunities from within the organization across multiple locations in the US, South America, and India.
Position Summary:
We are seeking a seasoned EDI 837 Claims Data Engineer to design, develop, and maintain data pipelines that process healthcare claims in compliance with HIPAA and ANSI X12 standards. This role requires deep expertise in Electronic Data Interchange (EDI), particularly the 837 transaction set, and will be pivotal in ensuring accurate, timely, and secure claims data exchange across payers, providers, clearinghouses, state agencies, and CMS.
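For orientation, the sketch below shows the delimiter structure of an 837-style X12 payload: segments terminated by '~', elements separated by '*', with claim-level data carried in CLM segments. The sample fragment is stylized and hypothetical, and real claims pipelines use a full X12 translator rather than string splitting.

```python
# Illustrative only: split a tiny, hypothetical 837-style X12 fragment into segments and elements.
# Real claims processing uses a proper X12 translator; this just shows the delimiter structure.
sample = (
    "ST*837*0001*005010X222A1~"
    "CLM*PATIENT123*125.00***11:B:1*Y*A*Y*Y~"
    "SE*3*0001~"
)

segments = [seg for seg in sample.split("~") if seg]   # '~' terminates each segment
for seg in segments:
    elements = seg.split("*")                          # '*' separates elements
    if elements[0] == "CLM":
        # CLM01 is the claim submitter's identifier, CLM02 the total claim charge amount.
        claim_id, charge = elements[1], elements[2]
        print(f"claim {claim_id}: total charge {charge}")
```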
Key Responsibilities
EDI Development & Integration
Design, build, and maintain pipelines for processing 837 healthcare claim transactions.
Implement and support EDI workflows across multiple trading partners.
Ensure compliance with HIPAA regulations and ANSI X12 standards.
Data Engineering
Develop ETL processes to transform, validate, and load claims data into enterprise data warehouses.
Optimize data flows for scalability, reliability, and performance.
Collaborate with analysts and stakeholders to ensure claims data accuracy.
Write and optimize SQL queries, stored procedures, and scripts for validation and reporting.
Monitoring & Troubleshooting
Monitor EDI transactions for errors, rejections, and compliance issues.
Troubleshoot and resolve data mapping, translation, and connectivity problems.
Perform root cause analysis and implement corrective actions.
Collaboration
Work closely with business analysts, QA teams, and IT operations to support claims processing.
Partner with healthcare domain experts to align technical solutions with business needs.
Required Skills & Qualifications
5+ years of experience in healthcare data engineering or claims integration.
Strong expertise with EDI 837 transactions and healthcare claims processing.
Proven experience with Medicaid and Medicare data exchanges between state agencies and CMS.
Hands-on experience with Databricks, SSIS, and SQL Server.
Knowledge of HIPAA compliance, CMS reporting requirements, and interoperability standards.
Strong problem-solving skills and ability to work in cross-functional teams.
Excellent communication and documentation skills.
Preferred Skills
Experience with Azure cloud platforms
Familiarity with other EDI transactions (835, 270/271, 276/277).
Exposure to data governance frameworks and security best practices.
Background in data warehousing and healthcare analytics.
Senior Data Engineer
Data scientist job in Tampa, FL
Company:
Toorak Capital Partners is an integrated correspondent lending and table funding platform that acquires business purpose residential, multifamily and mixed-use loans throughout the U.S. and the United Kingdom. Headquartered in Tampa, FL, Toorak Capital Partners acquires these loans directly from a network of private lenders on a correspondent basis.
Summary:
The role of the Lead Data Engineer is to develop and implement high-performance, scalable data solutions to support Toorak's Data Strategy.
Lead Data architecture for Toorak Capital.
Lead efforts to create API framework to use data across customer facing and back office applications.
Establish consistent data standards, reference architectures, patterns, and practices across the organization for both OLTP and OLAP (Data Warehouse, Data Lakehouse), MDM, and AI/ML technologies
Lead the sourcing and synthesis of Data Standardization and Semantics discovery efforts, turning insights into actionable strategies that define priorities for the team and rally stakeholders to the vision
Lead the data integration and mapping efforts to harmonize data.
Champion standards, guidelines, and direction for ontology, data modeling, semantics and Data Standardization in general at Toorak.
Lead strategies and design solutions for a wide variety of use cases like Data Migration (end-to-end ETL process), database optimization, and data architectural solutions for Analytics Data Projects
Required Skills:
Designing and maintaining the data models, including conceptual, logical, and physical data models
5+ years of experience using NoSQL systems like MongoDB and DynamoDB, relational SQL database systems (PostgreSQL), and Athena
5+ years of experience on Data Pipeline development, ETL and processing of structured and unstructured data
5+ years of experience in large-scale real-time stream processing using Apache Flink or Apache Spark with messaging infrastructure like Kafka/Pulsar (a brief Structured Streaming sketch follows this list)
Proficiency in using data management tools and platforms, such as data cataloging software, data quality tools, and data governance platforms
Experience with BigQuery, SQLMesh (or a similar SQL-based cloud platform).
Knowledge of cloud platforms and technologies such as Google Cloud Platform, Amazon Web Services.
Strong SQL skills.
Experience with API development and frameworks.
Knowledge in designing solutions with Data Quality, Data Lineage, and Data Catalogs
Strong background in Data Science, Machine Learning, NLP, Text processing of large data sets
Experience with one or more of the following would be nice to have: Dataiku, DataRobot, Databricks, UiPath.
Using version control systems (e.g., Git) to manage changes to data governance policies, procedures, and documentation
Ability to rapidly comprehend changes to key business processes and the impact on overall Data framework.
Flexibility to adjust to multiple demands, shifting priorities, ambiguity, and rapid change.
Advanced analytical skills.
High level of organization and attention to detail.
Self-starter attitude with the ability to work independently.
Knowledge of legal, compliance, and regulatory issues impacting data.
Experience in finance preferred.
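As referenced in the skills list above, here is a minimal Spark Structured Streaming sketch that reads events from Kafka and writes them to a console sink. The broker, topic, and checkpoint path are hypothetical, and the job assumes the spark-sql-kafka connector package is available on the cluster; a real pipeline would write to a Delta or warehouse sink instead.

```python
# Minimal sketch: stream hypothetical loan events from Kafka with Spark Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("loan-stream-sketch").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "loan-events")                 # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka records arrive as binary key/value; cast the payload to strings for downstream parsing.
parsed = events.select(col("key").cast("string"), col("value").cast("string"))

query = (
    parsed.writeStream
    .format("console")                                  # replace with a Delta/warehouse sink in practice
    .outputMode("append")
    .option("checkpointLocation", "/tmp/checkpoints/loan-events")  # hypothetical path
    .start()
)
query.awaitTermination()
```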
ML Data Engineer #978695
Data scientist job in Seffner, FL
Job Title: Data Engineer - AI/ML Pipelines
Work Model: Hybrid
Duration: CTH
The Data Engineer - AI/ML Pipelines plays a key role in designing, building, and maintaining scalable data infrastructure that powers analytics and machine learning initiatives. This position focuses on developing production-grade data pipelines that support end-to-end ML workflows, from data ingestion and transformation to feature engineering, model deployment, and monitoring.
The ideal candidate has hands-on experience working with operational systems such as Warehouse Management Systems (WMS) or ERP platforms, and is comfortable partnering closely with data scientists, ML engineers, and operational stakeholders to deliver high-quality, ML-ready datasets.
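To make the idea of an "ML-ready dataset" concrete, here is a small, hypothetical pandas sketch that turns WMS-style pick events into a per-SKU daily feature table with basic quality checks. All file paths, column names, and thresholds are invented for illustration.

```python
# Hypothetical sketch: build a simple ML-ready feature table from WMS-style pick events,
# with basic validation before handing the data to model training. All names are invented.
import pandas as pd

raw = pd.read_csv("pick_events.csv", parse_dates=["picked_at"])  # hypothetical extract

# Validation: require key fields, drop duplicates, and remove rows with impossible quantities.
raw = raw.dropna(subset=["order_id", "sku", "picked_at"]).drop_duplicates()
raw = raw[raw["quantity"] > 0]

# Feature engineering: per-SKU daily demand features a forecasting model might consume.
raw["pick_date"] = raw["picked_at"].dt.date
features = (
    raw.groupby(["sku", "pick_date"])
       .agg(daily_units=("quantity", "sum"), daily_orders=("order_id", "nunique"))
       .reset_index()
)

# Simple reconciliation check: engineered totals should match the cleaned raw totals.
assert features["daily_units"].sum() == raw["quantity"].sum(), "unit totals do not reconcile"

features.to_parquet("sku_daily_demand.parquet", index=False)  # hypothetical output path
```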
Key Responsibilities
ML-Focused Data Engineering
Build, optimize, and maintain data pipelines specifically designed for machine learning workflows.
Collaborate with data scientists to develop feature sets, implement data versioning, and support model training, evaluation, and retraining cycles.
Participate in initiatives involving feature stores, model input validation, and monitoring of data quality feeding ML systems.
Data Integration from Operational Systems
Ingest, normalize, and transform data from WMS, ERP, telemetry, and other operational data sources.
Model and enhance operational datasets to support real-time analytics and predictive modeling use cases.
Pipeline Automation & Orchestration
Build automated, reliable, and scalable pipelines using tools such as Azure Data Factory, Airflow, or Databricks Workflows.
Ensure data availability, accuracy, and timeliness across both batch and streaming systems.
Data Governance & Quality
Implement validation frameworks, anomaly detection, and reconciliation processes to ensure high-quality ML inputs.
Support metadata management, lineage tracking, and documentation of governed, auditable data flows.
Cross-Functional Collaboration
Work closely with data scientists, ML engineers, software engineers, and business teams to gather requirements and deliver ML-ready datasets.
Translate modeling and analytics needs into efficient, scalable data architecture solutions.
Documentation & Mentorship
Document data flows, data mappings, and pipeline logic in a clear, reproducible format.
Provide guidance and mentorship to junior engineers and analysts on ML-focused data engineering best practices.
Required Qualifications
Technical Skills
Strong experience building ML-focused data pipelines, including feature engineering and model lifecycle support.
Proficiency in Python, SQL, and modern data transformation tools (dbt, Spark, Delta Lake, or similar).
Solid understanding of orchestrators and cloud data platforms (Azure, Databricks, etc.).
Familiarity with ML operations tools such as MLflow, TFX, or equivalent frameworks.
Hands-on experience working with WMS or operational/logistics data.
Experience
5+ years in data engineering, with at least 2 years directly supporting AI/ML applications or teams.
Experience designing and maintaining production-grade pipelines in cloud environments.
Proven ability to collaborate with data scientists and translate ML requirements into scalable data solutions.
Education & Credentials
Bachelor's degree in Computer Science, Data Engineering, Data Science, or a related field (Master's preferred).
Relevant certifications are a plus (e.g., Azure AI Engineer, Databricks ML, Google Professional Data Engineer).
Preferred Qualifications
Experience with real-time ingestion using Kafka, Kinesis, Event Hub, or similar.
Exposure to MLOps practices and CI/CD for data pipelines.
Background in logistics, warehousing, fulfillment, or similar operational domains.