Principal Data Scientist (Gen AI) - Hybrid - St. Pete
Saint Petersburg, FL
At SGS Technologies, we are a software development and staffing company that doesn't simply talk services - we deliver them. With two decades of experience, we combine the latest technological knowledge, future-driven ideas, and specialized skills to meet many forms of IT requirements.
SGS has served numerous clients nationwide in various industries and we want you to be a part of that growth and opportunity!
We are looking for a Principal Data Scientist (Gen AI) for a 12+ month contract (potential extension, or may convert to full-time) at either location (St. Petersburg, FL 33716 or Denver, CO 80246) with a direct financial client. W2 only, for US citizens or green card holders.
Notes from the Hiring Manager:
• This role requires a hybrid onsite schedule (2-3 days per week) at a client physical location, with St. Petersburg, FL or Denver, CO strongly preferred. Fully remote candidates will not be considered.
• Senior-level, Data Science-focused position requiring 3+ years of recent, hands-on Data Science experience.
• Strong proficiency in Python and SQL is essential.
• The role leans more toward Generative AI than traditional Machine Learning within the Data Science space.
• Opportunity for extension based on performance, with potential to convert to full-time for the right candidate.
Summary:
The Principal Data Scientist supports the development of advanced decision-support systems by applying techniques from data analytics (including statistical analysis) and machine learning, particularly NLP. The successful candidate will join an innovative and energetic team that develops capabilities to improve the performance and efficiency of our business units. This is a hands-on role in which the Data Scientist is expected to carry out a project from start to finish.
Skills:
· Excellent communication skills, with the ability to explain technical concepts to a non-technical audience
· Strong knowledge of machine learning techniques, including NLP, with a focus more on the Gen AI side than the traditional ML side
· Knowledge of statistics, including hypothesis testing (see the example after this skills list)
· Strong Python skills
· Proficient in SQL
· Comfortable with Unix scripting
· Experience building models that actually delivered value to a business
· Experience deploying systems to a production environment
· Experience working in an Agile environment
· Working knowledge of “big data” technology
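For context on the statistics bar above, here is a minimal hypothesis-testing example in Python; it is a sketch on synthetic data, and the numbers and threshold are illustrative, not from the posting:

    # Minimal two-sample test on synthetic data: does a new model
    # variant move a business metric? (All numbers are illustrative.)
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    control = rng.normal(loc=100.0, scale=15.0, size=500)    # baseline metric
    treatment = rng.normal(loc=103.0, scale=15.0, size=500)  # candidate model

    # Welch's t-test: does not assume equal variances between groups.
    t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    print("Reject H0 at the 5% level" if p_value < 0.05 else "Fail to reject H0")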
Education:
· Minimum of a Bachelor's degree and six (6) years of relevant experience or combination of education, training and experience.
Junior Data Scientist (TS/SCI)
Springfield, VA
We are seeking a junior-level Data Science professional with a strong academic foundation and early hands-on experience to join our team as an Exploitation Specialist. The ideal candidate will hold a bachelor's degree in a data science-related field and bring internship or project experience that demonstrates curiosity, initiative, and a willingness to learn from senior team members. This role is a great opportunity for someone eager to grow their technical skill set while supporting a high-impact mission.
Required Qualifications
Active TS/SCI clearance with the willingness to obtain a CI polygraph
Ability to work onsite in Northern Virginia, 40 hours per week (telework options are extremely limited)
Proficiency with Python and SQL
Preferred Qualifications
Familiarity with GEOINT collection and related NGA/NRO systems
Experience with additional programming languages such as R, JavaScript, HTML, and CSS
Understanding of object-oriented programming
Experience using visualization tools such as Grafana, Tableau, or Kibana
Ability to quickly learn new technologies, adapt to evolving mission requirements, and support the development/testing of new analytic methodologies
Data Scientist with Python ML/NLP
Addison, TX
Role: Data Scientist with Python ML/NLP
Years of experience: 10+
Full-time
Job Responsibilities:
We're looking for a Data Scientist who will be responsible for designing, building, and maintaining document capture applications. The ideal candidate will have a solid background in software engineering, experience building machine learning NLP models, and good familiarity with Gen AI models.
High Level Skills Required
Primary - 7+ years as a Data Scientist or in related roles
Bachelor's degree in Computer Science, or a related technical field
Deep understanding of, and some exposure to, new open-source Gen AI models
At least 5 years of programming experience in software development and Agile processes
At least 5 years of Python (or equivalent) programming experience working with ML/NLP models.
Senior Agentic AI Data Scientist
Bethlehem, PA
We need HANDS-ON engineering leaders, not architects.
MUST BE A VERY SEASONED DATA SCIENCE ENGINEER WHO IS WILLING TO DO A SHORT ONLINE TEST.
Hybrid role - candidates must be able to work onsite in Hudson Yards, NY or Bethlehem, PA, 2-3 days per week.
I will not entertain out-of-state candidates.
We're looking for a very Senior Data Scientist - Agentic AI with strong hands-on experience in AI/ML, LLMs, and intelligent automation. This role will focus on building, deploying, and scaling Agentic AI systems and enterprise-level generative AI solutions that transform business operations and customer experiences.
You'll work on high-visibility projects alongside senior leadership, translating cutting-edge AI research into real-world impact.
Key Responsibilities:
Design and deploy Agentic AI solutions to automate complex workflows.
Operationalize LLMs and generative AI to process unstructured data (contracts, claims, medical records, etc.).
Build autonomous agents and reasoning systems integrated into enterprise platforms.
Partner with engineering and AIOps teams to move models from prototype to production.
Translate research in reinforcement learning and reasoning into business-ready AI applications.
Mentor junior data scientists and establish best practices for scalable AI development.
What We're Looking For:
PhD with 2+ years of experience, or Master's with 10+ years, in Statistics, Computer Science, Engineering, or Applied Mathematics.
5+ years of hands-on AI/ML development experience.
Strong programming skills in Python and frameworks such as PyTorch, TensorFlow, and LangGraph (a minimal LangGraph sketch follows this list).
Proven background in machine learning, optimization, and statistical modeling.
Excellent communication, leadership, and cross-functional collaboration skills.
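Since LangGraph is called out in the list above, here is a minimal sketch of an agentic workflow; it assumes a recent langgraph release, and the state fields and node names are illustrative, not from the posting:

    # Minimal LangGraph sketch: a two-step "plan then act" agent.
    from typing import TypedDict
    from langgraph.graph import StateGraph, START, END

    class AgentState(TypedDict):
        task: str
        plan: str
        result: str

    def plan_node(state: AgentState) -> dict:
        # A real system would call an LLM here to draft a plan.
        return {"plan": f"steps to handle: {state['task']}"}

    def act_node(state: AgentState) -> dict:
        # A real system would execute tools against the plan.
        return {"result": f"executed: {state['plan']}"}

    graph = StateGraph(AgentState)
    graph.add_node("plan", plan_node)
    graph.add_node("act", act_node)
    graph.add_edge(START, "plan")
    graph.add_edge("plan", "act")
    graph.add_edge("act", END)
    app = graph.compile()

    print(app.invoke({"task": "triage an insurance claim"}))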
Data Scientist with Gen AI and Python experience
Palo Alto, CA
About the Company:
Droisys is an innovation technology company focused on helping companies accelerate their digital initiatives from strategy and planning through execution. We leverage deep technical expertise, Agile methodologies, and data-driven intelligence to modernize systems of engagement and simplify human/tech interaction.
Amazing things happen when we work in environments where everyone feels a true sense of belonging and when candidates have the requisite skills and opportunities to succeed. At Droisys, we invest in our talent and support career growth, and we are always on the lookout for amazing talent who can contribute to our growth by delivering top results for our clients. Join us to challenge yourself and accomplish work that matters.
Here are the job details:
Data Scientist with Gen AI and Python experience
Palo Alto, CA - 5 days onsite
Interview Mode: Phone & F2F
Job Overview:
We are seeking a competent Data Scientist who is independent, results-driven, and capable of taking business requirements and building out the technology to generate statistically sound analysis and production-grade ML models.
Data science skills with GenAI and LLM knowledge.
Expertise in Python/Spark and their related libraries and frameworks.
Experience building ML training pipelines and familiarity with the effort involved in ML model deployment.
Experience with other ML concepts: real-time distributed model inference pipelines, champion/challenger frameworks, and A/B testing.
Familiar with DS/ML Production implementation.
Excellent problem-solving skills, with attention to detail, focus on quality and timely delivery of assigned tasks.
Prior knowledge of Azure cloud and Databricks will be a big plus.
Droisys is an equal opportunity employer. We do not discriminate based on race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law. Droisys believes in diversity, inclusion, and belonging, and we are committed to fostering a diverse work environment.
Data Scientist
Hartford, CT
Dear,
I hope you're doing well! I lead Recruiting for EXL Service, a global leader in the AI & Analytics space here in the U.S. We're building exceptional teams in Data Engineering to power the next wave of intelligent, data-driven solutions for our clients across industries.
Your background as a Data Scientist (GenAI + Python) caught my attention, and I'd love to connect to explore how your expertise might align with our exciting opportunities.
If you're open to a conversation, let's set up a quick call to discuss your experience, career aspirations, compensation and benefits, and the innovative work we're doing at EXL.
Looking forward to connecting! If you are interested please send me your updated word format resume to ****************************** and reach me @ ************.
Job Title: Data Scientist (GenAI + Python)
Location: Hartford, CT (hybrid)
Duration: Full Time
Minimum Experience: 3+ years in Data Science
Role Overview:
We are seeking an experienced Data Scientist with deep expertise in Generative AI and Python to design, develop, and deploy advanced AI models that drive innovation and deliver impactful solutions. This role involves working on cutting-edge projects, leveraging large language models (LLMs), multimodal AI, and generative techniques to solve complex business problems.
Key Responsibilities:
Develop and evaluate LLM-powered solutions such as summarization, classification, extraction and conversational assistants
Design prompt strategies, RAG architectures, and fine-tuning/adapter approaches where appropriate
Apply NLP and ML methods: text classification, topic modeling, semantic search, embeddings, clustering, regression/causal inference as needed
Develop Python-based prototypes and evaluation frameworks for LLM and GenAI use cases
Advanced use of object-oriented programming and functional programming patterns for ML workflows
Key Qualifications:
3+ years' experience in Data Science/ML with demonstrable GenAI/LLM project experience
Strong Python skills (pandas, numpy, scikit-learn; plus LangChain/LlamaIndex OR direct SDKs like OpenAI/Vertex AI/Hugging Face)
Hands-on with embeddings, vector search, RAG, prompt design, and LLM evaluation (see the retrieval sketch after this list)
Solid grounding in statistics & experimentation: hypothesis testing, confidence intervals, power analysis, A/B testing design
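To make the embeddings/vector-search/RAG line above concrete, here is a minimal retrieval sketch in plain numpy; the embed() stub stands in for a real embedding model (e.g., an OpenAI, Vertex AI, or Hugging Face endpoint), and the documents are illustrative:

    # Minimal RAG retrieval sketch: embed documents, rank by cosine
    # similarity, and assemble a grounded prompt for an LLM.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Hypothetical stand-in: hash words into a fixed-size unit vector.
        vec = np.zeros(64)
        for word in text.lower().split():
            vec[hash(word) % 64] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm else vec

    docs = [
        "Our refund policy allows returns within 30 days.",
        "Premium support is available 24/7 for enterprise plans.",
        "Invoices are emailed on the first of each month.",
    ]
    doc_vecs = np.stack([embed(d) for d in docs])

    def retrieve(query: str, k: int = 2) -> list:
        scores = doc_vecs @ embed(query)  # cosine similarity of unit vectors
        return [docs[i] for i in np.argsort(scores)[::-1][:k]]

    question = "When do invoices go out?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"
    print(prompt)  # in a real pipeline, this prompt goes to the LLM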
Applied Data Scientist/ Data Science Engineer
Austin, TX
Role: Applied Data Scientist/ Data Science Engineer
Years of experience: 8+
Job type: Full-time
Job Responsibilities:
You will be part of a team that innovates and collaborates with internal stakeholders to deliver world-class solutions with a customer first mentality. This group is passionate about the data science field and is motivated to find opportunity in, and develop solutions for, evolving challenges.
You will:
Solve business and customer issues utilizing AI/ML - Mandatory
Build prototypes and scalable AI/ML solutions that will be integrated into software products
Collaborate with software engineers, business stakeholders and product owners in an Agile environment
Have complete ownership of model outcomes and drive continuous improvement
Essential Requirements:
Strong coding skills in Python and SQL - Mandatory
Machine Learning knowledge (Deep Learning, Information Retrieval (RAG), GenAI, Classification, Forecasting, Regression, etc., on large datasets) with experience in ML model deployment
Ability to work with internal stakeholders to translate business questions into quantitative problem statements
Ability to effectively communicate data science progress to non-technical internal stakeholders
Ability to lead a team of data scientists is a plus
Experience with Big Data technologies and/or software development is a plus
Machine Learning Engineer / Data Scientist / GenAI
New York, NY
NYC, NY / Hybrid
12+ Months
Project: leveraging Llama to extract cybersecurity insights from unstructured data in their ticketing system.
Must have strong experience with:
Llama
Python
Hadoop
MCP
Machine Learning (ML)
They need a strong developer with experience using Llama and Hadoop (where the data sits), plus experience with MCP. They have various ways to pull the data out of their tickets, but they want someone who can come in, recommend the best approach, and then get it done. They have tight timelines.
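As a rough illustration of the project above, here is a hedged sketch of ticket-insight extraction with a Llama model via Hugging Face transformers; the checkpoint name, prompt format, and JSON keys are assumptions (Llama checkpoints on Hugging Face are gated, and the real ticket data would come out of Hadoop):

    # Sketch: ask a Llama model to extract cybersecurity insights from a
    # ticket as JSON. Model name and output schema are illustrative.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical choice
    )

    ticket = (
        "User reports repeated failed logins from an unknown IP, "
        "followed by a password-reset request at 02:13 UTC."
    )
    prompt = (
        "Extract cybersecurity insights from this ticket as JSON with keys "
        "'indicators', 'severity', and 'recommended_action'.\n\n"
        f"Ticket: {ticket}\n\nJSON:"
    )
    out = generator(prompt, max_new_tokens=128, do_sample=False)
    print(out[0]["generated_text"])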
Thanks and Regards!
Lavkesh Dwivedi
************************
Amtex System Inc.
28 Liberty Street, 6th Floor | New York, NY - 10005
************
********************
Senior Data Scientist
McLean, VA
Purpose:
As a Data Scientist, you will play a key role in delivering impactful, data-driven solutions for our strategic enterprise clients. This role also offers the opportunity to shape and grow Infocepts' Data Science & AI practice, contributing to high-impact AI/ML initiatives, crafting data-driven narratives for stakeholders, and applying advanced techniques to solve complex business problems from strategy to execution.
Key Result Areas and Activities:
Design, build, and deploy AI/ML solutions using modern cloud and data platforms.
Lead data science projects across industries, ensuring alignment with business goals.
Apply supervised, unsupervised, deep learning, and Generative AI (e.g., LLMs, agentic workflows) techniques to address client use cases.
Collaborate with data engineering teams to optimize model pipelines using Delta Lake and Spark.
Communicate findings effectively through data visualizations and stakeholder presentations.
Drive adoption of MLOps practices for scalable and reliable model deployment.
Contribute to the evolution of Infocepts' Data Science & AI offerings through innovation and knowledge sharing.
Roles & Responsibilities
Essential Skills
5+ years of experience in applied AI and machine/deep learning.
Hands-on experience with Databricks, MLflow, PySpark, and Spark MLlib.
Proficiency in Python and SQL for model development and data manipulation.
Strong understanding of supervised and unsupervised learning, deep learning, and Generative AI.
Familiarity with cloud platforms: AWS, Azure, and GCP.
Solid foundation in advanced statistical methods and probabilistic analysis.
Ability to lead end-to-end AI/ML projects, including design, development, and stakeholder management.
Experience with visualization tools like Tableau, Power BI, or similar.
Familiarity with ML workflow orchestration and MLOps practices.
Desirable Skills
Experience with LLMs (Large Language Models) and agentic AI workflows.
Familiarity with modern data platforms like Snowflake.
Exposure to real-time data processing in cloud-native environments.
Contributions to open-source AI projects or publications in data science communities.
Qualifications
Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics, or a related field.
Certifications in cloud platforms (AWS, Azure, GCP) or Databricks are a plus.
Qualities:
Able to consult, write, and present persuasively
Able to work in a self-organized and cross-functional team
Able to iterate based on new information, peer reviews, and feedback
Able to work seamlessly with clients across multiple geographies
Research-focused mindset
Excellent analytical, presentation, reporting, documentation, and interpersonal skills
"Infocepts is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law."
Sr Data Platform Engineer
Elk Grove, CA
Hybrid role, 3x a week in office in Elk Grove, CA; no remote option
This is a direct hire opportunity.
We're seeking a seasoned Senior Data Platform Engineer to design, build, and optimize scalable data solutions that power analytics, reporting, and AI/ML initiatives. This full‑time role is hands‑on, working with architects, analysts, and business stakeholders to ensure data systems are reliable, secure, and high‑performing.
Responsibilities:
Build and maintain robust data pipelines (structured, semi‑structured, unstructured).
Implement ETL workflows with Spark, Delta Lake, and cloud-native tools (see the sketch after this list).
Support big data platforms (Databricks, Snowflake, GCP) in production.
Troubleshoot and optimize SQL queries, Spark jobs, and workloads.
Ensure governance, security, and compliance across data systems.
Integrate workflows into CI/CD pipelines with Git, Jenkins, Terraform.
Collaborate cross‑functionally to translate business needs into technical solutions.
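For a sense of the pipeline work described above, here is a minimal Spark + Delta Lake ETL sketch; the paths, schema, and cleansing rules are illustrative, and it assumes a cluster with Delta Lake configured (e.g., Databricks):

    # Minimal ETL: read raw JSON, cleanse/type it, write a Delta table.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    raw = spark.read.json("/mnt/raw/orders/")  # hypothetical landing path

    clean = (
        raw.filter(F.col("order_id").isNotNull())
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .dropDuplicates(["order_id"])
    )

    (clean.write.format("delta")
          .mode("overwrite")
          .partitionBy("order_date")
          .save("/mnt/curated/orders"))  # hypothetical curated path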
Qualifications:
7+ years in data engineering with production pipeline experience.
Expertise in Spark ecosystem, Databricks, Snowflake, GCP.
Strong skills in PySpark, Python, SQL.
Experience with RAG systems, semantic search, and LLM integration.
Familiarity with Kafka, Pub/Sub, vector databases.
Proven ability to optimize ETL jobs and troubleshoot production issues.
Agile team experience and excellent communication skills.
Certifications in Databricks, Snowflake, GCP, or Azure.
Exposure to Airflow, BI tools (Power BI, Looker Studio).
Data Engineer (AWS Redshift, BI, Python, ETL)
Manhattan Beach, CA
We are seeking a skilled Data Engineer with strong experience in business intelligence (BI) and data warehouse development to join our team. In this role, you will design, build, and optimize data pipelines and warehouse architectures that support analytics, reporting, and data-driven decision-making. You will work closely with analysts, data scientists, and business stakeholders to ensure reliable, scalable, and high-quality data solutions.
Responsibilities:
Develop and maintain ETL/ELT pipelines for ingesting, transforming, and delivering data.
Design and enhance data warehouse models (star/snowflake schemas) and BI datasets.
Optimize data workflows for performance, scalability, and reliability.
Collaborate with BI teams to support dashboards, reporting, and analytics needs.
Ensure data quality, governance, and documentation across all solutions.
Qualifications:
Proven experience with data engineering tools (SQL, Python, ETL frameworks).
Strong understanding of BI concepts, reporting tools, and dimensional modeling.
Hands-on experience with cloud data platforms (e.g., AWS, Azure, GCP) is a plus.
Excellent problem-solving skills and ability to work in a cross-functional environment.
Data Engineer w/ GCP
Alpharetta, GA
We're looking for a Data Engineer to build and maintain scalable data pipelines and cloud data infrastructure on GCP. The role focuses on BigQuery, Dataflow, and modern ETL/ELT to support analytics and ML workflows.
MUST HAVES
A problem solver with the ability to analyze and research complex issues and propose actionable solutions and/or strategies.
Solid understanding of, and hands-on experience with, major cloud platforms.
Experience in designing and implementing data pipelines.
Must have strong Python, SQL & GCP skills
Responsibilities
Build and optimize batch/streaming pipelines using Dataflow, Pub/Sub, and Composer (see the pipeline sketch after this list).
Develop and tune BigQuery models, queries, and ingestion processes.
Implement IaC (Terraform), CI/CD, monitoring, and data quality checks.
Ensure data governance, security, and reliable pipeline operations.
Collaborate with data science teams and support Vertex AI-based ML workflows.
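As context for the Dataflow/Pub/Sub bullet above, here is a minimal Apache Beam streaming sketch (run with the DataflowRunner in practice); the project, topic, table, and schema are illustrative:

    # Streaming pipeline: Pub/Sub -> parse JSON -> BigQuery.
    import json
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(streaming=True)  # add project/runner flags on Dataflow

    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromPubSub(topic="projects/my-proj/topics/events")
            | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "Write" >> beam.io.WriteToBigQuery(
                "my-proj:analytics.events",
                schema="event_id:STRING,ts:TIMESTAMP,payload:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )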
Must-Have
Must have strong Python, SQL & GCP skills
3-5+ years of data engineering experience.
Hands-on GCP experience (BigQuery, Dataflow, Pub/Sub).
Solid ETL/ELT and data modeling experience.
Nice-to-Have
GCP certifications, Spark, Kafka, Airflow, dbt/Dataform, Docker/K8s.
Data Engineer
Alpharetta, GA
5 days onsite in Alpharetta, GA
Skills required:
Python
Data Pipeline
Data Analysis
Data Modeling
Must have solid Cloud experience
AI/ML
Strong problem-solving skills
Strong communication skills
A problem solver with the ability to analyze and research complex issues and propose actionable solutions and/or strategies.
Solid understanding of, and hands-on experience with, major cloud platforms.
Experience in designing and implementing data pipelines.
Must have experience with one of the following: GCP, AWS, or Azure - and MUST have the drive to learn GCP.
Data Architect
Piscataway, NJ
Data Solution Architect: The ideal candidate will have a deep understanding of Microsoft data services, including Azure Databricks, Azure Data Factory (ADF), Azure Synapse, and ETL/ELT processes. This role focuses on designing, developing, and maintaining cloud-based data pipelines and solutions to drive our analytics and business intelligence capabilities.
Develop modern data solutions and architecture for cloud-native data platforms.
Build cost-effective infrastructure in Databricks and orchestrate workflows using Databricks/ADF.
Lead data strategy sessions focused on scalability, performance, and flexibility.
Collaborate with customers to implement solutions for data modernization.
Should have 14+ years of experience, with the last 4 years implementing cloud-native, end-to-end data solutions in Databricks, from ingestion to consumption, supporting a variety of needs such as modern data warehousing, BI, insights, and analytics.
Should have experience architecting and implementing end-to-end modern data solutions using Azure and advanced data processing frameworks such as Databricks.
Experience with Databricks, PySpark, and modern data platforms.
Proficiency in cloud-native architecture and data governance.
Strong experience in migrating from on-premises to cloud solutions (Spark, Hadoop to Databricks).
Good appreciation of, and at least one implementation experience with, processing substrates in data engineering, such as ETL tools, Kafka, and ELT techniques.
Knowledge of designing and implementing Data Mesh and Data Products will be an added advantage.
About ValueMomentum
ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities. We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain. This includes Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the US insurance industry.
GCP Data Engineer
Lone Tree, CO
We are looking for an experienced Data Engineer to build and maintain scalable data pipelines on Google Cloud Platform (GCP). In this role, you will be crucial in serving our Cyber Security data mart and supporting security analytics.
Must Have:
• Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.
• 5+ years of hands-on data management experience: gathering data from multiple sources, consolidating it into a single centralized location, and transforming it with business logic into a form consumable for visualization and data analysis.
• Strong expertise in Google BigQuery, Google Cloud Storage, Cloud Composer, and related Google Cloud Platform (GCP) services.
• Proficiency in Python and SQL for data processing and automation.
• Experience with ETL processes and data pipeline design.
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration.
What you're good at
• Design, build, and maintain scalable data pipelines using Google Cloud Platform tools such as BigQuery, Cloud Storage, and Cloud Composer (see the DAG sketch after this list).
• Develop and optimize SQL queries to support data extraction, transformation, and loading (ETL) processes.
• Collaborate with cross-functional teams, including business customers and Subject Matter Experts, to understand data requirements and deliver effective solutions.
• Implement best practices for data quality, data governance, and data security.
• Monitor and troubleshoot data pipeline issues, ensuring high availability and performance.
• Contribute to data architecture decisions to provide recommendations for improving the data pipeline.
• Stay up to date with emerging trends and technologies in cloud-based data engineering and cybersecurity.
• Exceptional communication skills, including the ability to gather relevant data and information, actively listen, dialogue freely, and verbalize ideas effectively.
• Ability to work in an Agile work environment to deliver incremental value to customers by managing and prioritizing tasks.
• Proactively lead investigation and resolution efforts when data issues are identified, taking ownership to resolve them in a timely manner.
• Ability to operate and document processes and procedures for producing metrics.
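To illustrate the Cloud Composer work mentioned above, here is a minimal Airflow DAG sketch for a daily BigQuery load; the DAG id, dataset, and SQL are illustrative, and it assumes Airflow 2.4+ with the Google provider package (both ship with current Composer):

    # Daily BigQuery ETL step scheduled from Cloud Composer (Airflow).
    import pendulum
    from airflow import DAG
    from airflow.providers.google.cloud.operators.bigquery import (
        BigQueryInsertJobOperator,
    )

    with DAG(
        dag_id="cyber_mart_daily",
        schedule="@daily",
        start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
        catchup=False,
    ) as dag:
        load_events = BigQueryInsertJobOperator(
            task_id="load_events",
            configuration={
                "query": {
                    "query": """
                        INSERT INTO `my-proj.cyber_mart.daily_events`
                        SELECT event_id, ts, source_ip
                        FROM `my-proj.raw.security_events`
                        WHERE DATE(ts) = '{{ ds }}'
                    """,
                    "useLegacySql": False,
                }
            },
        )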
Senior Snowflake Data Engineer
Santa Clara, CA
About the job
Why Zensar?
We're a bunch of hardworking, fun-loving, people-oriented technology enthusiasts. We love what we do, and we're passionate about helping our clients thrive in an increasingly complex digital world. Zensar is an organization focused on building relationships, with our clients and with each other, and happiness is at the core of everything we do. In fact, we're so into happiness that we've created a Global Happiness Council, and we send out a Happiness Survey to our employees each year. We've learned that employee happiness requires more than a competitive paycheck, and our employee value proposition (grow, own, achieve, learn: GOAL) lays out the core opportunities we seek to foster for every employee. Teamwork and collaboration are critical to Zensar's mission and success, and our teams work on a diverse and challenging mix of technologies across a broad industry spectrum. These industries include banking and financial services, high-tech and manufacturing, healthcare, insurance, retail, and consumer services. Our employees enjoy flexible work arrangements and a competitive benefits package, including medical, dental, vision, and 401(k), among other benefits. If you are looking for a place to have an immediate impact, to grow and contribute, where we work hard, play hard, and support each other, consider joining team Zensar!
Zensar is seeking a Senior Snowflake Data Engineer in Santa Clara, CA (work from office all 5 days). This position is open as a full-time role, with excellent benefits and growth opportunities, as well as a contract role.
Job Description:
Key Requirements:
Strong hands-on experience in data engineering using Snowflake with proven ability to build and optimize large-scale data pipelines.
Deep understanding of data architecture principles, including ingestion, transformation, storage, and access control.
Solid experience in system design and solution architecture, focusing on scalability, reliability, and maintainability.
Expertise in ETL/ELT pipeline design, including data extraction, transformation, validation, and load processes.
In-depth knowledge of data modeling techniques (dimensional modeling, star, and snowflake schemas).
Skilled in optimizing compute and storage costs across Snowflake environments.
Strong proficiency in Snowflake administration, including database design, schema management, user roles, permissions, and access control policies.
Hands-on experience implementing data lineage, quality, and monitoring frameworks.
Advanced proficiency in SQL for data processing, transformation, and automation.
Experience with reporting and visualization tools such as Power BI and Sigma Computing.
Excellent communication and collaboration skills, with the ability to work independently and drive technical initiatives.
Zensar believes that diversity of backgrounds, thought, experience, and expertise fosters the robust exchange of ideas that enables the highest quality collaboration and work product. Zensar is an equal opportunity employer. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Zensar is committed to providing veteran employment opportunities to our service men and women. Zensar is committed to providing equal employment opportunities for persons with disabilities or religious observances, including reasonable accommodation when needed. Accommodations made to facilitate the recruiting process are not a guarantee of future or continued accommodations once hired.
Zensar does not facilitate/sponsor any work authorization for this position.
Candidates who are currently employed by a client or vendor of Zensar may be ineligible for consideration.
Zensar values your privacy. We'll use your data in accordance with our privacy statement located at: *********************************
Data Architect
Saint Louis, MO
Data Engineering Lead/Architect
Contract-to-Hire
Banking Experience is highly preferred
Ideal Candidate: Experienced in a banking data modernization effort previously and can help lead/mentor the data engineering team. This individual needs to have a good solutioning/creative mindset and be willing to speak up.
Data Engineering Lead/Solution Architect
The ideal candidate will have a deep understanding of Microsoft data services, including Azure Fabric, Azure Data Factory (ADF), Azure Synapse, and ETL/ELT processes. This role focuses on designing, developing, and maintaining cloud-based data pipelines and solutions to drive our analytics and business intelligence capabilities.
Key Responsibilities:
Provide technical leadership in modernizing legacy data ingestion, ETL/ELT, and databases to cloud technologies (AWS/Azure).
Demonstrate a self-driven, ownership mindset to navigate ambiguity, resolve constraints, and mitigate risks with minimal supervision.
Implement data access, classification, and security patterns that comply with regulatory standards (PII, locational data, contractual obligations, etc.).
Build strong relationships with technical teams through effective communication, presentation, and collaboration skills.
Collaborate with stakeholders, business analysts, and SMEs to translate business requirements into scalable solutions.
Integrate data from multiple sources into cloud-based architectures, collaborating with cross-functional teams.
Work closely with data scientists, analysts, and stakeholders to meet data requirements with high-quality solutions.
Function within a matrixed team environment, sharing responsibilities across various teams.
Perform data profiling and analysis on both structured and unstructured data.
Design and map ETL/ELT pipelines for new or modified data streams, ensuring integration into on-prem or cloud-based data storage.
Automate, validate and maintain ETL/ELT processes using technologies such as Databricks, ADF, SSIS, Spark, Python, and Scala.
Proactively identify design, scope, or development issues and provide recommendations for improvement.
Conduct unit, system, and integration testing for ETL/ELT solutions, ensuring defects are resolved.
Create detailed documentation for data processes, architectures, and workflows.
Monitor and optimize the performance of data pipelines and databases.
Required Skills and Qualifications:
Experience in designing and implementing data warehouse and analytics solutions (on-premise and cloud).
Expertise in data warehousing concepts (ETL/ELT, data quality management, privacy/security, MDM) with hands-on experience using ADF, Data Factory, SSIS, and related tools.
Experience with cloud data and cloud-native data lakes/warehouses, including Microsoft Azure services (Fabric Lakehouse, ADF, Data Factory, Synapse, etc.).
Experience in Python, Scala, or Java for use with distributed processing and analytics frameworks such as Spark.
Familiarity with CI/CD practices and tools such as Azure DevOps, Git, or Jenkins.
Soft Skills:
Proven ability to mentor team members and guide best practices for data engineering.
Strong problem-solving skills with high attention to detail.
Excellent communication skills for effective collaboration with diverse teams.
Nice to Have:
Experience with Snowflake, Databricks, AWS
Experience with containerization, microservices, streaming, and event-sourcing architecture patterns.
Knowledge of Kafka and Eventstream architectures.
Experience with Microsoft Purview
Previous experience in the financial or banking sector.
Familiarity with machine learning concepts and frameworks.
Experience with reporting tools such as Power BI or Tableau.
Education:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
Senior Bigdata Engineer
New Jersey
Greetings!
We are looking for a Big Data Engineer to design, build, and maintain scalable data solutions. This role focuses on developing reliable data pipelines and platforms that support analytics, reporting, and data-driven decision making. The ideal candidate has strong hands-on experience with Python and SQL and is comfortable working with large, complex datasets.
Position: Sr. Big Data Engineer
Location: Whippany NJ (Hybrid)
Contract: Long term contract
Client: One of the largest financial clients.
Responsibilities
Design, develop, and maintain large-scale data pipelines and data platforms
Build efficient ETL and ELT processes using Python and SQL
Optimize data models, queries, and workflows for performance and reliability
Work with structured and unstructured data from multiple sources
Collaborate with data scientists, analysts, and software engineers to support analytics and machine learning use cases
Ensure data quality, consistency, and availability across systems
Monitor and troubleshoot data pipelines in production environments
Document data processes, models, and best practices
Required Qualifications
Strong experience in Python for data processing and pipeline development
Advanced SQL skills, including query optimization and complex data transformations
Experience working with big data technologies such as Spark, Hadoop, or similar frameworks
Solid understanding of data modeling, warehousing, and lakehouse concepts
Experience with cloud data platforms (AWS, Azure, or Google Cloud)
Familiarity with version control systems such as Git
Preferred Qualifications
Experience with workflow orchestration tools such as Airflow or similar
Knowledge of streaming technologies such as Kafka or equivalent
Experience with containerization and deployment tools (Docker, Kubernetes)
Exposure to data governance, security, and compliance best practices
Best Regards,
Data Engineer
Bloomington, MN
Are you an experienced Data Engineer with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Data Engineer to work at their company in Bloomington, MN.
Primary Responsibilities/Accountabilities:
Develop and maintain scalable ETL/ELT pipelines using Databricks and Airflow.
Build and optimize Python-based data workflows and SQL queries for large datasets.
Ensure data quality, reliability, and high performance across pipelines.
Collaborate with cross-functional teams to support analytics and reporting requirements.
Monitor, troubleshoot, and improve production data workflows.
Qualifications:
Strong hands-on experience with Databricks, Python, SQL, and Apache Airflow.
6-10+ years of experience in Data Engineering.
Experience with cloud platforms (Azure/AWS/GCP) and big data ecosystems.
Solid understanding of data warehousing, data modelling, and distributed data processing.
Python Data Engineer - THADC5693417
Houston, TX
Must Haves:
Strong proficiency in Python; 5+ years' experience.
Expertise in FastAPI and microservices architecture and development
Linking Python-based apps with SQL and NoSQL databases
Deployments on Docker and Kubernetes, plus monitoring tools
Experience with automated testing and test-driven development
Git source control, GitHub Actions, CI/CD, VS Code, and Copilot
Expertise in both on-prem SQL databases (Oracle, SQL Server, Postgres, DB2) and NoSQL databases
Working knowledge of data warehousing and ETL
Able to explain the business functionality of the projects/applications they have worked on
Ability to multitask and work on multiple projects simultaneously
NO CLOUD - they are on-prem
Day to Day:
Insight Global is looking for a Python Data Engineer for one of our largest oil and gas clients in Downtown Houston, TX. This person will be responsible for building Python-based connections between back-end SQL and NoSQL databases, architecting and coding FastAPI microservices, and performing testing on back-office applications. The ideal candidate will have experience developing applications with Python and microservices and implementing complex business functionality in Python.
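To make the FastAPI-plus-databases stack above concrete, here is a minimal sketch of one endpoint backed by an on-prem SQL database via SQLAlchemy; the connection string, table, and fields are illustrative:

    # One FastAPI endpoint reading from an on-prem SQL database.
    from fastapi import FastAPI, HTTPException
    from sqlalchemy import create_engine, text

    # Hypothetical Postgres URL; Oracle/SQL Server/DB2 URLs work the same way.
    engine = create_engine("postgresql+psycopg2://user:pass@db-host/backoffice")
    app = FastAPI()

    @app.get("/orders/{order_id}")
    def get_order(order_id: int) -> dict:
        with engine.connect() as conn:
            row = conn.execute(
                text("SELECT id, status, amount FROM orders WHERE id = :id"),
                {"id": order_id},
            ).mappings().first()
        if row is None:
            raise HTTPException(status_code=404, detail="order not found")
        return dict(row)

    # Run with: uvicorn main:app --reload  (assuming this file is main.py)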