REMOTE CONTRACT - Senior Reference Data Analyst with Private Markets domain experience is a MUST
Boston, MA jobs
Please send current resumes directly to ************************* Bhagyashree Yewle, Principal Lead Recruiter - YOH SPG *********************************************
100% REMOTE CONTRACT - Senior Reference Data & Investment Data Analyst with Private Markets domain experience is a MUST
Location: 100% remote, working EST hours. MUST be physically based in the US, in the EST or CST time zone.
MUST HAVE
Investment Business Analyst / Data Analyst; Private Markets domain experience is a MUST
10+ years of Data Analysis experience working with investment data
10+ years working with SQL
Significant experience with data mapping, data modeling, and data extraction
Strong understanding of reference data.
Experience with private markets (privates) and hedge funds.
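For context on the day-to-day work, the sketch below shows the kind of reference-data consistency check implied by the list above; the tables, columns, and values are hypothetical, and an in-memory SQLite database is used only to keep the example self-contained:

```python
# Hypothetical reference-data consistency check: find holdings that point at a
# security missing from the security master. Tables, columns, and values are
# invented; in-memory SQLite keeps the example runnable on its own.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE security_master (security_id TEXT PRIMARY KEY, asset_class TEXT, lei TEXT);
    CREATE TABLE fund_holdings (fund_id TEXT, security_id TEXT, market_value REAL);
    INSERT INTO security_master VALUES ('PE-001', 'Private Equity', 'LEI123');
    INSERT INTO fund_holdings VALUES
        ('FUND-A', 'PE-001', 1000000.0),
        ('FUND-A', 'PE-999', 250000.0);
""")

# Holdings that reference a security absent from the security master.
orphans = conn.execute("""
    SELECT h.fund_id, h.security_id, h.market_value
    FROM fund_holdings h
    LEFT JOIN security_master m ON m.security_id = h.security_id
    WHERE m.security_id IS NULL
""").fetchall()

print(orphans)  # expected: the PE-999 holding, which has no master record
```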
12+ month contract opportunity
Estimated Min Rate: $70.00
Estimated Max Rate: $80.00
What's In It for You?
We welcome you to be part of one of the largest and most legendary global staffing companies and to meet your career aspirations with us. Yoh's network of client companies has been employing professionals like you for over 65 years in the U.S., UK, and Canada. Join Yoh's extensive talent community for access to its vast network of opportunities, including this exclusive one. Benefit eligibility is in accordance with applicable laws and client requirements. Benefits include:
Medical, Prescription, Dental & Vision Benefits (for employees working 20+ hours per week)
Health Savings Account (HSA) (for employees working 20+ hours per week)
Life & Disability Insurance (for employees working 20+ hours per week)
MetLife Voluntary Benefits
Employee Assistance Program (EAP)
401K Retirement Savings Plan
Direct Deposit & weekly epayroll
Referral Bonus Programs
Certification and training opportunities
Note: Any pay ranges displayed are estimations. Actual pay is determined by an applicant's experience, technical expertise, and other qualifications as listed in the job description. All qualified applicants are welcome to apply.
Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Visit ************************************************ to contact us if you are an individual with a disability and require accommodation in the application process.
For California applicants, qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. All of the material job duties described in this posting are job duties for which a criminal history may have a direct, adverse, and negative relationship potentially resulting in the withdrawal of a conditional offer of employment.
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
By applying and submitting your resume, you authorize Yoh to review and reformat your resume to meet Yoh's hiring clients' preferences. To learn more about Yoh's privacy practices, please see our Candidate Privacy Notice: **********************************
Data Modeler II
Houston, TX jobs
Job Title: Data Modeler II
Type: W2 Contract (USA)/INC or T4 (Canada)
Work Setup: Hybrid (On-site with flexibility to work from home two days per week)
Industry: Oil & Gas
Benefits: Health, Dental, Vision
Job Summary
We are seeking a Data Modeler II with a product-driven, innovative mindset to design and implement data solutions that deliver measurable business value for Supply Chain operations. This role combines technical expertise with project management responsibilities, requiring collaboration with IT teams to develop solutions for small and medium-sized business challenges. The ideal candidate will have hands-on experience with data transformation, AI integration, and ERP systems, while also being able to communicate technical concepts in clear, business-friendly language.
Key Responsibilities
Develop innovative data solutions leveraging knowledge of Supply Chain processes and oil & gas industry value drivers.
Design and optimize ETL pipelines for scalable, high-performance data processing.
Integrate solutions with enterprise data platforms and visualization tools.
Gather and clean data from ERP systems for analytics and reporting.
Utilize AI tools and prompt engineering to enhance data-driven solutions.
Collaborate with IT and business stakeholders to deliver medium- and low-complexity solutions for local issues.
Oversee project timelines, resources, and stakeholder engagement.
Document project objectives, requirements, and progress updates.
Translate technical language into clear, non-technical terms for business users.
Support continuous improvement and innovation in data engineering and analytics.
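To make the Python-based data transformation in this list concrete, here is a minimal pandas sketch of an ERP extract cleanup; the file name and columns are hypothetical:

```python
# Minimal pandas cleanup of a hypothetical ERP purchase-order extract
# (assumes a po_extract.csv export with these columns exists).
import pandas as pd

raw = pd.read_csv("po_extract.csv", dtype=str)

clean = (
    raw
    .rename(columns=str.strip)  # tidy header whitespace
    .assign(
        order_date=lambda df: pd.to_datetime(df["order_date"], errors="coerce"),
        amount=lambda df: pd.to_numeric(df["amount"], errors="coerce"),
    )
    .dropna(subset=["po_number", "order_date"])          # drop unusable rows
    .drop_duplicates(subset=["po_number", "line_item"])  # de-duplicate line items
)

clean.to_csv("po_clean.csv", index=False)  # hand off to the reporting layer
```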
Basic / Required Qualifications
Bachelor's degree in Commerce (SCM), Data Science, Engineering, or related field.
Hands-on experience with:
Python for data transformation.
ETL tools (Power Automate, Power Apps; Databricks is a plus).
Oracle Cloud (Supply Chain and Financial modules).
Knowledge of ERP systems (Oracle Cloud required; SAP preferred).
Familiarity with AI integration and low-code development platforms.
Strong understanding of Supply Chain processes; oil & gas experience preferred.
Ability to manage projects and engage stakeholders effectively.
Excellent communication skills for translating technical concepts into business language.
Required Knowledge / Skills / Abilities
Advanced proficiency in data science concepts, including statistical analysis and machine learning.
Experience with prompt engineering and AI-driven solutions.
Ability to clean and transform data for analytics and reporting.
Strong documentation, troubleshooting, and analytical skills.
Business-focused mindset with technical expertise.
Ability to think outside the box and propose innovative solutions.
Special Job Characteristics
Hybrid work schedule (Wednesdays and Fridays remote).
Ability to work independently and oversee own projects.
SQL Data Analyst
Fort Mill, SC jobs
SQL Data Analyst (Mid-Level)
12+ month contract
Fort Mill, SC (ONSITE 5 DAYS A WEEK)
We are seeking a highly skilled and detail-oriented SQL Data Analyst to support data governance, quality assurance (UAT), and documentation for our supervisory platforms and data migration initiatives. This role plays a critical part in advancing our data strategy and governance framework by delivering actionable insights, ensuring data accuracy, and supporting cross-functional collaboration across business and technology teams.
The Data Analyst will contribute to enterprise-level data alignment, quality testing, and reporting that drive improved decision-making and operational efficiency.
Core Responsibilities
Document and support data flow diagrams (DFDs), source-to-target mappings, and data contracts
Perform data quality checks, validation, and UAT for migration and supervisory systems
Write and execute UAT scripts and test plans
Investigate and resolve data issues, risks, and reporting gaps
Partner with Product Owners, business, and technology teams
Support BAU controls and ongoing data integrity improvements
MUST-HAVE Technical Requirements (Day 1)
Strong SQL experience (non-negotiable)
Experience with data validation, profiling, and testing
Hands-on experience with SSIS and/or Informatica
Experience supporting data migrations or complex enterprise data environments
Ability to document data clearly (DFDs, mappings, testing artifacts)
Data modeling experience
Experience in financial services, supervision, compliance, or regulated environments
Exposure to systems of record (SOR), surveillance platforms, or compliance data
Familiarity with Snowflake, AWS, Tableau, Python, Alteryx
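As a hedged illustration of the data validation and source-to-target reconciliation emphasized above, a minimal SQL check might look like the following; the schema and values are invented, and an in-memory SQLite database is used only so the example runs on its own:

```python
# Hypothetical source-to-target reconciliation of the kind used in migration UAT:
# compare row counts and balance totals between a source and a target table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_accounts (account_id TEXT, balance REAL);
    CREATE TABLE tgt_accounts (account_id TEXT, balance REAL);
    INSERT INTO src_accounts VALUES ('A1', 100.0), ('A2', 250.0), ('A3', 75.5);
    INSERT INTO tgt_accounts VALUES ('A1', 100.0), ('A2', 250.0);
""")

src_rows, tgt_rows, src_total, tgt_total = conn.execute("""
    SELECT
        (SELECT COUNT(*) FROM src_accounts)               AS src_rows,
        (SELECT COUNT(*) FROM tgt_accounts)               AS tgt_rows,
        (SELECT ROUND(SUM(balance), 2) FROM src_accounts) AS src_total,
        (SELECT ROUND(SUM(balance), 2) FROM tgt_accounts) AS tgt_total
""").fetchone()

if src_rows != tgt_rows or src_total != tgt_total:
    print(f"MISMATCH: rows {src_rows} vs {tgt_rows}, totals {src_total} vs {tgt_total}")
else:
    print("source and target reconcile")
```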
Estimated Min Rate: $50.00
Estimated Max Rate: $55.00
What's In It for You?
We welcome you to be part of one of the largest and most legendary global staffing companies and to meet your career aspirations with us. Yoh's network of client companies has been employing professionals like you for over 65 years in the U.S., UK, and Canada. Join Yoh's extensive talent community for access to its vast network of opportunities, including this exclusive one. Benefit eligibility is in accordance with applicable laws and client requirements. Benefits include:
Medical, Prescription, Dental & Vision Benefits (for employees working 20+ hours per week)
Health Savings Account (HSA) (for employees working 20+ hours per week)
Life & Disability Insurance (for employees working 20+ hours per week)
MetLife Voluntary Benefits
Employee Assistance Program (EAP)
401K Retirement Savings Plan
Direct Deposit & weekly epayroll
Referral Bonus Programs
Certification and training opportunities
Note: Any pay ranges displayed are estimations. Actual pay is determined by an applicant's experience, technical expertise, and other qualifications as listed in the job description. All qualified applicants are welcome to apply.
Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Visit ************************************************ to contact us if you are an individual with a disability and require accommodation in the application process.
For California applicants, qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. All of the material job duties described in this posting are job duties for which a criminal history may have a direct, adverse, and negative relationship potentially resulting in the withdrawal of a conditional offer of employment.
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
By applying and submitting your resume, you authorize Yoh to review and reformat your resume to meet Yoh's hiring clients' preferences. To learn more about Yoh's privacy practices, please see our Candidate Privacy Notice: **********************************
Data Architect - Azure Databricks
Palo Alto, CA jobs
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor' and a ‘Vendor to Watch' by Gartner.
Please visit Fractal | Intelligence for Imagination for more information about Fractal.
Job Posting Title: Principal Architect - Azure Databricks
Job Description
Seeking a visionary and hands-on Principal Architect to lead large-scale, complex technical initiatives leveraging Databricks within the healthcare payer domain. This role is pivotal in driving data modernization, advanced analytics, and AI/ML solutions for our clients. You will serve as a strategic advisor, technical leader, and delivery expert across multiple engagements.
Responsibilities:
Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs such as sales forecasting, trade promotions, and supply chain optimization.
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
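For orientation, a minimal PySpark sketch of the Bronze-to-Silver promotion described above might look like the following; the paths, table, and columns are hypothetical, and a Databricks or Delta-enabled Spark environment is assumed:

```python
# Minimal sketch of a Bronze -> Silver promotion on Delta Lake.
# Paths and columns are hypothetical; assumes Delta Lake is available to Spark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw ingested records, kept as delivered.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/pos_sales")

# Silver: cleansed and conformed - typed columns, deduplicated, bad rows dropped.
silver = (
    bronze
    .withColumn("sale_date", F.to_date("sale_date"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["transaction_id"])
    .filter(F.col("amount").isNotNull())
)

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/pos_sales")
```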
Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases
Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using PySpark, SQL, DLT (Delta Live Tables), and Databricks Workflows.
Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
Performance, Scalability, and Reliability
Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks Observability, Ganglia, Cloud-native tools
Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
Adoption of AI Copilots & Agentic Development
Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for:
Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks.
Generating documentation and test cases to accelerate pipeline development.
Interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for:
Data profiling and schema inference.
Automated testing and validation.
Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
Requirements:
Bachelor's or master's degree in computer science, Information Technology, or a related field.
12-18 years of hands-on experience in data engineering, with at least 5 years on Databricks architecture and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on Azure Databricks using PySpark, SQL, and Databricks-native features.
Familiarity with ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage (Azure Data Lake Storage Gen2)
Experience designing Lakehouse architectures with bronze, silver, gold layering.
Expertise in optimizing Databricks performance using Delta Lake features such as OPTIMIZE, VACUUM, ZORDER, and Time Travel (see the sketch after this list).
Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
Experience with designing Data marts using Databricks SQL warehouse and integrating with BI tools (Power BI, Tableau, etc.).
Hands-on experience designing solutions using Workflows (Jobs), Delta Lake, Delta Live Tables (DLT), Unity Catalog, and MLflow.
Familiarity with Databricks REST APIs, Notebooks, and cluster configurations for automated provisioning and orchestration.
Experience in integrating Databricks with CI/CD pipelines using tools such as Azure DevOps, GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning Databricks workspaces and resources
In-depth experience with Azure Cloud services such as ADF, Synapse, ADLS, Key Vault, Azure Monitor, and Azure Security Centre.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with Unity Catalog, RBAC, tokenization, and data classification frameworks
4-5+ years of consulting experience across multiple clients
Contribute to pre-sales, proposals, and client presentations as a subject matter expert.
Participated in and led RFP responses for your organization; experience providing solutions to technical problems and providing cost estimates
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.
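As a hedged illustration of the Delta Lake maintenance features called out in the list above (OPTIMIZE, ZORDER, VACUUM, Time Travel), the commands are typically issued through Spark SQL; the table and column names here are hypothetical:

```python
# Hypothetical Delta Lake maintenance sketch; assumes Databricks SQL / Delta Lake.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows on a frequently filtered column.
spark.sql("OPTIMIZE silver.pos_sales ZORDER BY (store_id)")

# Remove data files no longer referenced by the table (default retention applies).
spark.sql("VACUUM silver.pos_sales")

# Time Travel: query an earlier version of the table for reproducibility checks.
previous = spark.sql("SELECT COUNT(*) FROM silver.pos_sales VERSION AS OF 10")
previous.show()
```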
Pay:
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $ 200,000 - $300,000. In addition, you may be eligible for a discretionary bonus for the current performance period.
Benefits:
As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Data Modeler
Midland, TX jobs
Job Title: Data Modeler - Net Zero Program Analyst
Type: W2 Contract (12-month duration)
Work Setup: On-site
Industry: Oil & Gas
Benefits: Dental, Healthcare, Vision & 401(k)
Airswift is seeking a Data Modeler - Net Zero Program Analyst to join one of our major clients on a 12-month contract. This newly created role supports the company's decarbonization and Net Zero initiatives by managing and analyzing operational data to identify trends and optimize performance. The position involves working closely with operations and analytics teams to deliver actionable insights through data visualization and reporting.
Responsibilities:
Build and maintain Power BI dashboards to monitor emissions, operational metrics, and facility performance.
Extract and organize data from systems such as SiteView, ProCount, and SAP for analysis and reporting.
Conduct data validation and trend analysis to support sustainability and operational goals.
Collaborate with field operations and project teams to interpret data and provide recommendations.
Ensure data consistency across platforms and assist with integration efforts (coordination only, no coding required).
Present findings through clear reports and visualizations for technical and non-technical stakeholders.
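As a rough illustration of the validation and trend analysis described above, a pandas sketch might look like the following; the facilities, values, and 5% flag threshold are all invented:

```python
# Hypothetical month-over-month emissions trend check feeding a dashboard.
import pandas as pd

emissions = pd.DataFrame({
    "facility": ["Midland-1"] * 3 + ["Midland-2"] * 3,
    "month": pd.to_datetime(["2024-01-01", "2024-02-01", "2024-03-01"] * 2),
    "co2e_tonnes": [120.0, 118.5, 131.0, 80.0, 79.0, 77.5],
})

# Month-over-month change per facility; large increases get flagged for review.
emissions["mom_change_pct"] = (
    emissions.sort_values("month")
    .groupby("facility")["co2e_tonnes"]
    .pct_change() * 100
)

flagged = emissions[emissions["mom_change_pct"] > 5]
print(flagged[["facility", "month", "co2e_tonnes", "mom_change_pct"]])
```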
Required Skills and Experience:
7+ years of experience in data analysis within Oil & Gas or Energy sectors.
Strong proficiency in Power BI (required).
Familiarity with SiteView, ProCount, and/or SAP (preferred).
Ability to translate operational data into insights that support emissions reduction and facility optimization.
Experience with surface facilities, emissions estimation, or power systems.
Knowledge of other visualization tools (Tableau, Spotfire) is a plus.
High School Diploma or GED required.
Additional Details:
Preference for Midland-based candidates; Houston-based candidates will need to travel to Midland periodically (travel reimbursed).
No per diem offered.
Office-based role with low exposure risk.
SAP Public Cloud Data Management
New York, NY jobs
Manager, SAP Public Cloud Data
Salary Range: $135,000 - $218,000
Introduction
We're seeking an experienced SAP data conversion leader to join a rapidly growing Advisory practice at a leading professional services firm. This role is perfect for a strategic thinker who thrives on complex data challenges and wants to make a significant impact on large-scale SAP S/4HANA Public Cloud implementations. You'll lead the entire data conversion workstream, develop innovative solutions, and mentor teams while working with enterprise clients on their digital transformation journeys. If you're looking for a firm that prioritizes professional growth, offers world-class training, and values collaboration, this is an exceptional opportunity to advance your career.
Required Skills & Qualifications
Minimum 5 years of experience in SAP data conversion and governance
Must have experience at a Big 4 firm
At least one full lifecycle SAP S/4HANA Public Cloud implementation with direct involvement in scoping and designing the data workstream during the sales pursuit phase
Bachelor's degree from an accredited college or university in an appropriate field
Proven expertise in developing and executing end-to-end data conversion strategies, including legacy landscape assessment, source-to-target mapping, and data governance framework design
Demonstrated success managing complete data conversion workstreams within large-scale SAP programs, including planning, risk mitigation, issue resolution, and budget oversight
Strong technical command of data architecture principles with hands-on experience designing ETL pipelines and leading full data migration lifecycles from mock cycles through final cutover
Ability to travel 50-80%
Must be authorized to work in the U.S. without the need for employment-based visa sponsorship now or in the future
Preferred Skills & Qualifications
Experience with SAP BTP (Business Technology Platform) and Datasphere for data orchestration
Track record of developing reusable ETL templates, automation scripts, and governance accelerators
Experience supporting sales pursuits by providing data conversion scope, solution design, and pricing input
Strong leadership and mentoring capabilities with data-focused teams
Day-to-Day Responsibilities
Develop and own comprehensive data conversion strategies, assessing legacy landscapes, defining source-to-target mapping, establishing cleansing protocols, and designing data governance frameworks
Lead the data conversion workstream within SAP S/4HANA programs, managing project plans, budgets, and financials while proactively identifying and resolving risks and issues
Design and oversee data conversion architecture, including ETL pipelines, staging strategies, and validation protocols
Execute the full data conversion lifecycle hands-on, including ETL design, multiple mock cycles, data validation, and final cutover, ensuring alignment with program milestones
Support pursuit directors during sales cycles by providing expert input into data conversion scope, solution design, and pricing
Lead and mentor data conversion teams by assigning tasks, managing delivery quality, and fostering a collaborative culture
Drive efficiency through the development of reusable templates and automation accelerators for future projects
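As a rough, hypothetical illustration of the source-to-target mapping and validation work described above (field names and transformations are illustrative only, not SAP-delivered content):

```python
# Hypothetical source-to-target mapping check used during a mock conversion cycle.
legacy_record = {
    "KUNNR": "0000012345",      # legacy customer number
    "NAME1": "Acme Industrial",
    "LAND1": "US",
}

# Source-to-target map: legacy field -> (target field, transformation)
field_map = {
    "KUNNR": ("BusinessPartner", lambda v: v.lstrip("0")),
    "NAME1": ("BusinessPartnerName", str.strip),
    "LAND1": ("Country", str.upper),
}

target_record = {
    tgt: transform(legacy_record[src])
    for src, (tgt, transform) in field_map.items()
}

# Simple validation rule: every mapped target field must be populated.
missing = [field for field, value in target_record.items() if not value]
print(target_record)
print("missing:", missing)
```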
Company Benefits & Culture
Comprehensive, competitive benefits package including medical, dental, and vision coverage
401(k) plans with company contributions
Disability and life insurance
Robust personal well-being benefits supporting mental health
Personal Time Off based on job classification and years of service
Two annual breaks where PTO is not required (year-end and July 4th holiday period)
World-class training facility and leading market tools
Continuous learning and career development opportunities
Collaborative, team-driven culture where you can be your whole self
Fast-growing practice with abundant advancement opportunities
Note: This position does not offer visa sponsorship (H-1B, L-1, TN, O-1, E-3, H-1B1, F-1, J-1, OPT, CPT or any other employment-based visa).
#TECH
AWS Data Architect
San Jose, CA jobs
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor' and a ‘Vendor to Watch' by Gartner.
Please visit Fractal | Intelligence for Imagination for more information about Fractal.
Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will design the system architecture and solution, ensure the platform is both scalable and performant, and create automated data pipelines.
Responsibilities:
Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases
Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, SQL
Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
Performance, Scalability, and Reliability
Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks Observability, Ganglia, Cloud-native tools
Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
Adoption of AI Copilots & Agentic Development
Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for
Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks.
Generating documentation and test cases to accelerate pipeline development.
Interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for
Data profiling and schema inference.
Automated testing and validation.
Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
Requirements:
Bachelor's or master's degree in computer science, Information Technology, or a related field.
8-12 years of hands-on experience in data engineering, with at least 5 years on Python and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL.
Excellent hands-on experience with workload automation tools such as Airflow and Prefect.
Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage
Experience designing Lakehouse architectures with bronze, silver, gold layering.
Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
Experience with designing Data marts using Cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources.
In-depth experience with AWS Cloud services such as Glue, S3, and Redshift.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with RBAC, tokenization, and data classification frameworks
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality
Must be able to work in PST time zone.
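To make the workload-automation requirement above concrete, below is a minimal Airflow DAG sketch (assuming Airflow 2.4+); the DAG name, S3 paths, and task bodies are placeholders standing in for real ingestion and cleansing jobs, not a prescribed design:

```python
# Minimal Airflow DAG sketch (assumes Airflow 2.4+); names and paths are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_raw():
    # Placeholder: land source files in a raw/bronze zone (path is hypothetical).
    print("ingesting to s3://example-bucket/bronze/")


def transform_to_silver():
    # Placeholder: trigger a PySpark/Glue job that cleanses and conforms the data.
    print("writing cleansed data to s3://example-bucket/silver/")


with DAG(
    dag_id="example_bronze_to_silver",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_raw", python_callable=ingest_raw)
    transform = PythonOperator(task_id="transform_to_silver", python_callable=transform_to_silver)

    ingest >> transform  # ingestion must complete before transformation runs
```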
Pay:
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
Benefits:
As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
AWS Data Architect
Santa Rosa, CA jobs
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor' and a ‘Vendor to Watch' by Gartner.
Please visit Fractal | Intelligence for Imagination for more information about Fractal.
Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will design the system architecture and solution, ensure the platform is both scalable and performant, and create automated data pipelines.
Responsibilities:
Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases
Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, SQL
Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
Performance, Scalability, and Reliability
Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks Observability, Ganglia, Cloud-native tools
Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
Adoption of AI Copilots & Agentic Development
Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for
Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks.
Generating documentation and test cases to accelerate pipeline development.
Interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for
Data profiling and schema inference.
Automated testing and validation.
Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
Requirements:
Bachelor's or master's degree in computer science, Information Technology, or a related field.
8-12 years of hands-on experience in data engineering, with at least 5 years on Python and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL.
Excellent hands-on experience with workload automation tools such as Airflow and Prefect.
Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage
Experience designing Lakehouse architectures with bronze, silver, gold layering.
Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
Experience with designing Data marts using Cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources.
In-depth experience with AWS Cloud services such as Glue, S3, and Redshift.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with RBAC, tokenization, and data classification frameworks
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality
Must be able to work in PST time zone.
Pay:
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
Benefits:
As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
AWS Data Architect
San Francisco, CA jobs
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor' and a ‘Vendor to Watch' by Gartner.
Please visit Fractal | Intelligence for Imagination for more information about Fractal.
Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will design the system architecture and solution, ensure the platform is both scalable and performant, and create automated data pipelines.
Responsibilities:
Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases
Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, SQL
Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
Performance, Scalability, and Reliability
Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks Observability, Ganglia, Cloud-native tools
Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
Adoption of AI Copilots & Agentic Development
Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for
Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks.
Generating documentation and test cases to accelerate pipeline development.
Interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for
Data profiling and schema inference.
Automated testing and validation.
Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
Requirements:
Bachelor's or master's degree in computer science, Information Technology, or a related field.
8-12 years of hands-on experience in data engineering, with at least 5 years on Python and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL.
Excellent hands-on experience with workload automation tools such as Airflow and Prefect.
Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage
Experience designing Lakehouse architectures with bronze, silver, gold layering.
Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
Experience with designing Data marts using Cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources.
In-depth experience with AWS Cloud services such as Glue, S3, and Redshift.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with RBAC, tokenization, and data classification frameworks
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality
Must be able to work in PST time zone.
Pay:
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
Benefits:
As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Data Analyst - PySpark & A/B Testing
Sunnyvale, CA jobs
In the role of Data Analyst with PySpark & A/B Testing experience, you will be responsible for solving business problems for our Retail/CPG clients through data-driven insights. Your role will combine a judicious and tactful blend of Hi-Tech domain knowledge, analytical experience, client-interfacing skills, solution design, and business acumen, so your insights not only enlighten the clients but also pave the way for deeper future analysis. You will advise clients and internal teams through short-burst, high-impact engagements on identifying business problems and solving them through suitable approaches and techniques spanning learning and technology. You will effectively communicate data-derived insights to non-technical audiences and mentor junior or aspiring consultants/data scientists. You will play a key role in building components of a framework or product while addressing practical business problems. You will be part of a learning culture, where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued.
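To make the A/B-testing portion of the role concrete, here is a minimal worked example of a two-proportion z-test in plain Python; the visitor and conversion counts are invented:

```python
# Hypothetical A/B test: compare conversion rates of a control and a variant.
from math import erf, sqrt

# Control: 2,000 visitors, 120 conversions; Variant: 2,000 visitors, 152 conversions.
n_a, x_a = 2000, 120
n_b, x_b = 2000, 152

p_a, p_b = x_a / n_a, x_b / n_b
p_pool = (x_a + x_b) / (n_a + n_b)                      # pooled rate under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error under H0
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"lift: {p_b - p_a:.3%}, z = {z:.2f}, p = {p_value:.4f}")
```

In practice the same comparison would usually be run with a statistics library and reviewed alongside guardrail metrics, but the arithmetic above is the core of the test.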
Required Qualifications:
Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
Candidate must be located within commuting distance of Sunnyvale, CA or be willing to relocate to these locations. This position may require travel within the US.
Candidates authorized to work for any employer in the United States without employer-based visa sponsorship are welcome to apply. Unable to provide immigration sponsorship for this role at this time.
At least 4 years of experience in Information Technology
Proven applied experience in exploratory data analysis and in devising, deploying, and servicing statistical models
Strong hands-on experience with data mining, data visualization (Tableau), A/B testing, and SQL for developing data pipelines to source and transform data
Strong experience using Python, advanced SQL, and PySpark
Preferred Qualifications:
Advanced degree (Master's or above) in a quantitative discipline such as Statistics, Applied Math, Operations Research, Computer Science, Engineering, Physics, or a related field
Marketing domain background (web analytics, clickstream data analysis, and other marketing campaign KPIs)
Knowledge of Machine Learning techniques
Along with competitive pay, as a full-time employee you are also eligible for the following benefits:
Medical/Dental/Vision/Life Insurance
Long-term/Short-term Disability
Health and Dependent Care Reimbursement Accounts
Insurance (Accident, Critical Illness, Hospital Indemnity, Legal)
401(k) plan and contributions dependent on salary level
Paid holidays plus Paid Time Off
AWS Data Architect
Fremont, CA jobs
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor' and a ‘Vendor to Watch' by Gartner.
Please visit Fractal | Intelligence for Imagination for more information about Fractal.
Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will design the system architecture and solution, ensure the platform is both scalable and performant, and create automated data pipelines.
Responsibilities:
Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases
Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, SQL
Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
Performance, Scalability, and Reliability
Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks Observability, Ganglia, Cloud-native tools
Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
Adoption of AI Copilots & Agentic Development
Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for
Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks.
Generating documentation and test cases to accelerate pipeline development.
Interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for
Data profiling and schema inference.
Automated testing and validation.
Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
Requirements:
Bachelor's or master's degree in computer science, Information Technology, or a related field.
8-12 years of hands-on experience in data engineering, with at least 5 years on Python and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL.
Excellent hands-on experience with workload automation tools such as Airflow and Prefect.
Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage
Experience designing Lakehouse architectures with bronze, silver, gold layering.
Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
Experience with designing Data marts using Cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources.
In-depth experience with AWS Cloud services such as Glue, S3, and Redshift.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with RBAC, tokenization, and data classification frameworks
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality
Must be able to work in PST time zone.
Pay:
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
Benefits:
As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Data Architect
Cincinnati, OH jobs
THIS IS A W2 (NOT C2C OR REFERRAL BASED) CONTRACT OPPORTUNITY
REMOTE MOSTLY WITH 1 DAY/MONTH ONSITE IN CINCINNATI - LOCAL CANDIDATES TAKE PREFERENCE
RATE: $75-85/HR WITH BENEFITS
We are seeking a highly skilled Data Architect to function in a consulting capacity to analyze, redesign, and optimize a Medical Payments client's environment. The ideal candidate will have deep expertise in SQL, Azure cloud services, and modern data architecture principles.
Responsibilities
Design and maintain scalable, secure, and high-performing data architectures.
Lead migration and modernization projects in heavily used production systems.
Develop and optimize data models, schemas, and integration strategies.
Implement data governance, security, and compliance standards.
Collaborate with business stakeholders to translate requirements into technical solutions.
Ensure data quality, consistency, and accessibility across systems.
Required Qualifications
Bachelor's degree in Computer Science, Information Systems, or related field.
Proven experience as a Data Architect or similar role.
Strong proficiency in SQL (query optimization, stored procedures, indexing); a brief tuning sketch follows this list.
Hands-on experience with Azure cloud services for data management and analytics.
Knowledge of data modeling, ETL processes, and data warehousing concepts.
Familiarity with security best practices and compliance frameworks.
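As a rough illustration of the SQL tuning skills listed above, the sketch below creates a covering index and runs a sargable, parameterized query through pyodbc. The connection string, table, and columns are invented for the example and are not the client's.

```python
# Illustrative indexing/query-tuning sketch against a hypothetical claims table.
# The connection string, table, and column names are assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=example;DATABASE=demo;"
    "Trusted_Connection=yes;"
)
cur = conn.cursor()

# A covering index so the common lookup below can seek rather than scan.
cur.execute("""
    CREATE INDEX IX_Claims_PayerId_ServiceDate
    ON dbo.Claims (PayerId, ServiceDate)
    INCLUDE (ClaimAmount);
""")

# Sargable predicate: compare the column directly instead of wrapping it in a
# function, so the optimizer can use the index; parameters avoid plan bloat.
cur.execute("""
    SELECT PayerId, SUM(ClaimAmount) AS TotalBilled
    FROM dbo.Claims
    WHERE ServiceDate >= ? AND ServiceDate < ?
    GROUP BY PayerId;
""", ("2024-01-01", "2025-01-01"))
rows = cur.fetchall()
conn.commit()
```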
Preferred Skills
Understanding of Electronic Health Records systems.
Understanding of Big Data technologies and modern data platforms beyond the scope of this project.
Data Modeler 1 (8337)
Spokane, WA jobs
ACS Professional Staffing is looking for an employee to work on-site with our client. This Data Modeler 1 will implement RTU, NETMOM, and ICCP (OAGMOM) data model changes on defined timelines and validate SCADA-Network interdependencies to protect grid safety and reliability. Will develop and maintain dispatcher-facing UI displays (one-line, IPS/RAS, tabular) and support bi-weekly and emergency deployments of updated EMS databases and displays to operational systems. Will maintain documentation and troubleshoot issues, collaborate across EMS, dispatch, T&E, field, and stakeholder teams to set modeling/display standards, and ensure records are marked and maintained per INFOSEC/IGLM compliance and audit requirements. This full-time position is located in Spokane, WA.
Pay Rate: $38.87 - $55.53
Benefits:
Paid holidays: 11
PTO: Starting at 10 days
Sick Leave: Up to 56 hours per year (prorated based on start date)
EAP: Employee Assistance Program
Benefit Options Available: Medical, Dental, Vision, FSA, DCA, LPFSA, HSA, Group Life/AD&D, Voluntary Life/AD&D, Voluntary Short-Term Disability, Voluntary Long-Term Disability, Voluntary Critical Illness, Voluntary Accident, Hospital Indemnity, 401k (immediately eligible for employee and employer contributions - employer match up to 4%)
Other benefits include the following: Calm App, LifeBalance Discount Program
Responsibilities:
Implement changes according to defined timelines in the modeling of Remote Terminal Unit (RTU) data in the SCADAMOM database, substation interconnections in the NETMOM database, and of Inter-Control Center Communications Protocol (ICCP) data in the OAGMOM database.
Validate models, including interdependence between the Supervisory Control and Data Acquisition (SCADA) and Network models critical to the safety and reliability of the power grid.
Develop and maintain user-interface displays in accordance with dispatcher specifications, including one-line diagrams, Intertie Protection Scheme (IPS) and Remedial Action Scheme (RAS) displays and tabular displays.
Provide support for the bi-weekly deployment of updated EMS databases and displays to the operational systems, coordinating closely with Dispatchers at both Dittmer Control Center (DCC) and Munro Control Center (MCC), and assist with unscheduled/emergency deployments.
Develop and maintain documentation related to all EMS database and display maintenance and troubleshooting.
Collaborate with EMS teams, Dittmer and Munro Dispatch, T&E, the field and other stakeholders to develop modeling and user display standards to promote reliability and consistency.
Mark documents and maintain filing system(s), files, emails, and records in accordance with compliance requirements. Share and disperse documents only to appropriate personnel (those with a Lawful Government Purpose (LGP) to know). Mark and maintain all official records in accordance with the Information Security (INFOSEC) and Information Governance & Lifecycle Management (IGLM) standards and procedures. Validate official records are accurately maintained for auditing purposes.
Requirements:
Associate's or Bachelor's Degree in Computer/Information Technology or a closely-related technical discipline is preferred:
1 year of experience is required with an applicable Bachelor's Degree.
3 years of experience is required with an applicable Associate's Degree.
5 years of experience is required without an applicable degree.
Experience should include working with logical data models, not just physical/DBA experience.
1 year of experience directly supporting the development and maintenance of GE-Alstom e-terra Habitat hierarchical databases and displays,
OR 2 years of experience directly supporting database modeling in a science/engineering organization.
1 year of hands-on experience working with hierarchical databases in GE-Alstom products to model power grid information for use in a control center environment.
Experience conducting data modeling sessions with business users and technical staff, which includes conveying complex ideas both verbally and in written form.
Experience using JIRA for work tracking.
Knowledge and understanding of database management system administration with regard to performance, configuration, and denormalization.
Experience leading efforts to verify system test data is loaded accurately and in a timely manner.
Knowledge of GE-Alstom e-terra Habitat databases and displays.
Knowledge of NERC-CIP and FISMA requirements.
Experience with Microsoft Visual Studio
Work sponsorship is not available at this time. Third-party candidates will not be considered for this position.
Because we are a federal government contractor, we have special restrictions placed on us for hiring foreign nationals into certain key positions within the company. This particular position requires U.S. citizenship.
ACS Professional Staffing will provide equal employment opportunities to all applicants without regard to the applicant's race, color, religion, sex, gender, genetic information, national origin, age, veteran status, disability status, or any other status protected by federal or state law. The company will provide reasonable accommodations to allow an applicant to participate in the hiring process if so requested.
If you have any questions about the job posting, please contact recruiting@acsprostaffing.com
If you have any questions about our Reasonable Accommodation Policy, please feel free to email hr@acsprostaffing.com
Data Architect
Washington, DC jobs
Job Title: Developer Premium I
Duration: 7 Months with long term extension
Hybrid Onsite: 4 days per week from Day 1, with a full transition to 100% onsite anticipated soon
Job Requirement:
Strong expertise in Data Architecture & data model design.
MS Azure (core experience)
Experience with SAP ECC preferred
SAFe Agile certification is a plus
Ability to work flexibly, including off hours, to support critical IT tasks and migration activities.
Educational Qualifications and Experience:
Bachelor's degree in Computer Science, Information Systems or in a related area of expertise.
Required number of years of proven experience in the specific technology/toolset as per Experience Matrix below for each Level.
Essential Job Functions:
Take functional specs and produce high quality technical specs
Take technical specs and produce complete, well-tested programs that meet user satisfaction and acceptance and precisely reflect the requirements: business logic, performance, and usability.
Conduct/attend requirements definition meetings with end-users and document system/business requirements
Conduct Peer Review on Code and Test Cases, prepared by other team members, to assess quality and compliance with coding standards
As required for the role, perform end-user demos of proposed solution and finished product, provide end user training and provide support for user acceptance testing
As required for the role, troubleshoot production support issues and find appropriate solutions within defined SLA to ensure minimal disruption to business operations
Ensure that Bank policies, procedures, and standards are factored into project design and development
As required for the role, install new releases and participate in upgrade activities
As required for the role, perform integration between on-premises systems, cloud systems, and third-party vendors
As required for the role, collaborate with different teams within the organization for infrastructure, integration, database administration support
Adhere to project schedules and report progress regularly
Prepare weekly status reports and participate in status meetings and highlight issues and constraints that would impact timely delivery of work program items
Find the appropriate tools to implement the project
Maintain knowledge of current industry standards and practices
As needed, interact and collaborate with Enterprise Architects (EA), Office of Information Security (OIS) to obtain approvals and accreditations
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
Data Modeler
Austin, TX jobs
We are seeking a Data Modeler in Austin, TX.
Onsite role
The Data Modeler will design, develop, and maintain complex data models that support higher education data initiatives. This role requires expertise in data modeling, database design, and data governance to ensure accurate and efficient data storage, retrieval, and processing. This position will work with cross-functional teams and an outside vendor to logically model the flow of data between agency systems. The ideal candidate will have experience in higher education, financial, or government data systems, working with relational and non-relational databases, and implementing best practices in data architecture.
Required skills:
4 years of experience in data modeling, database design, and data architecture.
Experience developing conceptual, logical, and physical data models (a minimal physical-model sketch follows this list).
Excellent communication skills, both verbal and written.
Proven ability to work on projects, ensuring timely completion within budget.
Proficiency in SQL and database management systems such as SQL Server, Oracle, or PostgreSQL.
Ability to implement data governance frameworks and ensure data quality.
Knowledge of ETL processes and data integration methodologies.
Experience documenting requirements for IT and business solutions that will meet program and user needs.
Experience working in cross-functional teams with business analysts, developers, and data engineers.
Experience working with sensitive data in higher education, financial, or government sectors.
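To illustrate what a physical-model deliverable might look like, here is a minimal SQLAlchemy sketch of a small enrollment star schema. The tables, columns, and SQLite target are assumptions for demonstration only, not the agency's actual model.

```python
# Hypothetical physical model for a small enrollment star schema (SQLAlchemy).
from sqlalchemy import (Column, Date, ForeignKey, Integer, Numeric, String,
                        create_engine)
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class DimStudent(Base):                     # dimension: who
    __tablename__ = "dim_student"
    student_key = Column(Integer, primary_key=True)
    student_id = Column(String(20), nullable=False, unique=True)
    residency = Column(String(30))

class DimTerm(Base):                        # dimension: when
    __tablename__ = "dim_term"
    term_key = Column(Integer, primary_key=True)
    term_code = Column(String(10), nullable=False)
    start_date = Column(Date)

class FactEnrollment(Base):                 # fact: the measurable event
    __tablename__ = "fact_enrollment"
    enrollment_key = Column(Integer, primary_key=True)
    student_key = Column(Integer, ForeignKey("dim_student.student_key"))
    term_key = Column(Integer, ForeignKey("dim_term.term_key"))
    credit_hours = Column(Numeric(4, 1))

# Materialize the physical schema in a local database for review.
Base.metadata.create_all(create_engine("sqlite:///model_review.db"))
```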
Preferred Skills:
Experience in Agile development and backlogs.
This is a long-term contract opportunity for an On-site role in Austin, TX. No sponsorship can be provided. Candidates must be able to pass a background check. If this interests you, please send your resume to *****************************
Luna Data Solutions, Inc. provides equal employment opportunities to all employees and applicants. All applicants will be considered for employment, and the company prohibits discrimination and harassment of any type without regard to age, race, color, religion, sexual orientation, gender identity, sex, national origin, genetics, protected veteran status, or disability status.
Data Architect
Plano, TX jobs
KPI Partners is a five-time Gartner-recognized data, analytics, and AI consulting company. We are leaders in data engineering on Azure, AWS, Google, Snowflake, and Databricks. Founded in 2006, KPI has over 400 consultants and has successfully delivered over 1,000 projects to our clients. We are looking for skilled data engineers who want to work with the best team in data engineering.
Title: Senior Data Architect
Location: Plano, TX (Hybrid)
Job Type: Contract - 6 Months
Key Skills: SQL, PySpark, Databricks, and Azure Cloud
Key Note: Looking for a Data Architect who is Hands-on with SQL, PySpark, Databricks, and Azure Cloud.
About the Role:
We are seeking a highly skilled and experienced Senior Data Architect to join our dynamic team at KPI, working on challenging and multi-year data transformation projects. This is an excellent opportunity for a talented data engineer to play a key role in building innovative data solutions using Azure Native Services and related technologies. If you are passionate about working with large-scale data systems and enjoy solving complex engineering problems, this role is for you.
Key Responsibilities:
Data Engineering: Design, development, and implementation of data pipelines and solutions using PySpark, SQL, and related technologies.
Collaboration: Work closely with cross-functional teams to understand business requirements and translate them into robust data solutions.
Data Warehousing: Design and implement data warehousing solutions, ensuring scalability, performance, and reliability.
Continuous Learning: Stay up to date with modern technologies and trends in data engineering and apply them to improve our data platform.
Mentorship: Provide guidance and mentorship to junior data engineers, ensuring best practices in coding, design, and development.
Must-Have Skills & Qualifications:
12+ years of overall experience in the IT industry.
4+ years of experience in data engineering, with a strong background in building large-scale data solutions.
4+ years of hands-on experience developing and implementing data pipelines using Azure stack experience (Azure, ADF, Databricks, Functions)
Proven expertise in SQL for querying, manipulating, and analyzing large datasets.
Strong knowledge of ETL processes and data warehousing fundamentals.
Self-motivated and independent, with a “let's get this done” mindset and the ability to thrive in a fast-paced and dynamic environment.
Good-to-Have Skills:
Databricks Certification is a plus.
Data Modeling, Azure Architect Certification.
Data Analyst - Payroll
Rosemead, CA jobs
Trident Consulting is seeking a Data Analyst for one of our clients in Rosemead, CA (hybrid), a global leader in business and technology services.
Role: Data Analyst
Duration: Contract
Rate: $18-23/Hr
Day-to-Day Responsibilities/Workload
Data Collection & Integration: Gather and consolidate data from diverse sources (SAP, SuccessFactors), including databases, spreadsheets, and other systems, ensuring accuracy and completeness (a consolidation sketch follows this list).
Data Analysis & Reporting: Utilize Power Query and other analytical tools to create clear, insightful reports and summaries that effectively communicate findings to non-technical stakeholders.
Client Support & Issue Resolution: Respond to client inquiries through a shared inbox, providing timely and professional assistance. Troubleshoot and resolve issues related to payroll and expense data with attention to detail and accuracy.
Process Improvement: Identify opportunities to streamline data workflows and enhance reporting efficiency through automation and best practices.
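The posting names Power Query as the reporting tool; purely as an illustration of the same consolidate-and-summarize step, here is a pandas sketch. The file names and column names are hypothetical.

```python
# Consolidation sketch in pandas (the posting names Power Query; pandas is
# used here only to illustrate the combine-and-summarize step).
# File names and column names are hypothetical.
import pandas as pd

payroll = pd.read_csv("sap_payroll_extract.csv")          # e.g., SAP export
timesheets = pd.read_excel("successfactors_hours.xlsx")   # e.g., SuccessFactors export

# Standardize the join key, then combine the two sources.
payroll["employee_id"] = payroll["employee_id"].astype(str).str.strip()
timesheets["employee_id"] = timesheets["employee_id"].astype(str).str.strip()
combined = payroll.merge(timesheets, on="employee_id", how="left", validate="1:m")

# Summary a non-technical stakeholder can read: pay and hours per cost center.
summary = (combined.groupby("cost_center", as_index=False)
           .agg(total_gross_pay=("gross_pay", "sum"),
                total_hours=("hours_worked", "sum")))
summary.to_excel("payroll_summary.xlsx", index=False)
```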
Required Skills/Attributes
Advanced Excel, Customer Service Skills, team player.
Desired Skills/Attributes
SAP/SuccessFactors knowledge; Power Query
About Trident:
Trident Consulting is a premier IT staffing firm providing high-impact workforce solutions to Fortune 500 and mid-market clients. Since 2005, we've specialized in sourcing elite technology and engineering talent for contract, direct hire, and managed services roles. Our expertise spans cloud, AI/ML, cybersecurity, and data analytics, supported by a 3M+ candidate database and a 78% fill ratio. With a highly engaged leadership team and a reputation for delivering hard-to-fill, niche talent, we help organizations build agile, high-performing teams that drive innovation and business success.
Some of our recent awards include:
Trailblazer Women Award 2025 by Consulate General of India in San Francisco.
Ranked as the #1 Women Owned Business Enterprise in the large category by ITServe.
Received the TechServe Excellence award.
Consistently ranked in the Inc. 5000 list of fastest-growing private companies in America.
Recognized in the SF Business Times as one of the Largest Bay Area BIPOC/Minority-Owned Businesses in 2022.
Senior Data Architect
Atlanta, GA jobs
Long-term opportunity with a rapidly growing company!
RESPONSIBILITIES:
Own end-to-end data architecture for enterprise SaaS platforms, including both OLTP and analytical serving layers
Design and operate solutions across Azure SQL DB/MI, Azure Databricks with Delta Lake, ADLS Gen2, Synapse Analytics / Microsoft Fabric
Partner with analytics teams on Power BI semantic models, including performance optimization and row-level security (RLS)
Define and implement Information Lifecycle Management (ILM): hot/warm/cold tiers, 2-year OLTP retention, archive/nearline, and a BI mirror that enables rich analytics without impacting production workloads.
Engineer ERP/SAP financial interfaces for idempotency, reconciliation, and traceability; design rollback/de-dup strategies and financial journal integrity controls (see the idempotent-load sketch after this list).
Govern schema evolution/DbVersions to prevent cross-customer regressions while achieving performance gains
Establish data SLOs (freshness, latency, correctness) mapped to customer SLAs; instrument monitoring/alerting and drive continuous improvement.
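One common way to get the idempotency and de-duplication described above is a keyed MERGE into Delta Lake. The sketch below assumes delta-spark is available and uses hypothetical table paths and journal keys, not the client's actual interfaces.

```python
# Idempotent load sketch: a Delta Lake MERGE keyed on the journal line's
# natural key, so replays and duplicate deliveries do not double-post.
# Paths and key columns are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("journal-merge").getOrCreate()

incoming = spark.read.format("delta").load("/lake/staging/sap_journal_batch")

target = DeltaTable.forPath(spark, "/lake/curated/gl_journal")
(target.alias("t")
 .merge(incoming.alias("s"),
        "t.company_code = s.company_code AND t.document_no = s.document_no "
        "AND t.line_item = s.line_item")
 .whenMatchedUpdateAll()      # replayed batch: refresh rather than duplicate
 .whenNotMatchedInsertAll()   # new journal lines: insert once
 .execute())
```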
This is a direct-hire opportunity in Atlanta. Work onsite the first 5-6 months, then transition to a hybrid schedule of 3 days in the office with 2 days remote (flex days).
REQUIRED SKILLS:
10+ years of experience in data or database engineering
5-8+ years owning data or database architecture for customer-facing SaaS or analytics platforms at enterprise scale
Proven experience operating at multi-terabyte scale (5+ TB) with measurable improvements in performance, reliability, and cost
Strong expertise with Azure data technologies
Advanced SQL skills, including query optimization, indexing, partitioning, CDC, caching, and schema versioning
Experience designing audit-ready, SLA-driven data platforms
Strong background in ERP/SAP data integrations, particularly financial data
Bachelor's degree
PREFERRED SKILLS:
Power BI performance modeling (RLS, composite models, incremental refresh, DAX optimization).
Modular monolith/microservices experience
Semantic tech (ontology/knowledge graphs), vector stores, and agentic AI orchestration experience
Must be authorized to work in the US. Sponsorships are not available.
Senior Data Engineer
Boston, MA jobs
firstPRO is now accepting resumes for a Senior Data Engineer role in Boston, MA. This is a direct-hire role, onsite 2-3 days per week.
RESPONSIBILITIES INCLUDE
Support and enhance the firm's Data Governance, BI platforms, and data stores.
Administer and extend data governance tools including Atlan, Monte Carlo, Snowflake, and Power BI.
Develop production-quality code and data solutions supporting key business initiatives.
Conduct architecture and code reviews to ensure security, scalability, and quality across deliverables.
Collaborate with the cloud migration, information security, and business analysis teams to design and deploy new applications and migrate existing systems to the cloud.
TECHNOLOGY EXPERIENCE
Hands-on experience supporting SaaS, business-facing applications.
Expertise in Python for data processing, automation, and production-grade development.
Strong knowledge of SQL, data modeling, and data warehouse design (Kimball/star schema preferred).
Experience with Power BI or similar BI/reporting tools.
Familiarity with data pipeline technologies and orchestration tools (e.g., Airflow, dbt); a minimal DAG sketch follows this list.
Experience with Snowflake, Redshift, BigQuery, or Athena.
Understanding of data governance, data quality, and metadata management frameworks.
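As a small illustration of the orchestration tooling mentioned above, here is a minimal Airflow DAG that gates a dbt run on a load check. The DAG id, schedule, and commands are assumptions, and the sketch targets Airflow 2.4 or later.

```python
# Minimal Airflow DAG sketch: run dbt marts after a load-validation step.
# DAG id, schedule, and commands are illustrative assumptions (Airflow 2.4+).
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="governed_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",        # daily at 06:00
    catchup=False,
) as dag:
    load_check = BashOperator(
        task_id="check_snowflake_load",
        bash_command="python check_load.py",   # hypothetical validation script
    )
    dbt_run = BashOperator(
        task_id="dbt_run_marts",
        bash_command="dbt run --select marts",
    )
    load_check >> dbt_run
```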
QUALIFICATIONS
BS or MS in Computer Science, Engineering, or a related technical field.
7+ years of professional software or data engineering experience.
Strong foundation in software design and architectural patterns.
Data Engineer
Austin, TX jobs
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
8+ years designing and delivering scalable data pipelines in modern data platforms
Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
Ability to lead cross-functional initiatives in matrixed teams
Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
Use Apache Airflow and similar tools for workflow automation and orchestration
Work with financial or regulated datasets while ensuring strong compliance and governance
Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
Design efficient Delta Lake models for reliability and performance
Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems (an API ingestion sketch follows this list)
Automate ingestion and workflows using Python and REST APIs
Support downstream analytics for BI, data science, and application workloads
Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
Automate DevOps workflows, testing pipelines, and workspace configurations
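To illustrate the API ingestion and Python automation items above, here is a minimal sketch that pages through a hypothetical REST endpoint with requests and lands the results in a bronze Delta table. The URL, token placeholder, and table name are all assumptions.

```python
# Ingestion sketch: page through a hypothetical REST API and land the
# records in a bronze Delta table (assumes a Delta-enabled Spark session
# and an existing "bronze" schema).
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api-ingest").getOrCreate()

records, page = [], 1
while True:
    resp = requests.get(
        "https://api.example.com/v1/customers",       # hypothetical endpoint
        params={"page": page, "page_size": 500},
        headers={"Authorization": "Bearer <token>"},   # placeholder credential
        timeout=30,
    )
    resp.raise_for_status()
    batch = resp.json().get("results", [])
    if not batch:
        break
    records.extend(batch)
    page += 1

# Convert the raw payload to a DataFrame and append to the bronze layer.
df = spark.createDataFrame(records)
df.write.format("delta").mode("append").saveAsTable("bronze.customers_api")
```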
Additional Skills
Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
CI/CD: Azure DevOps
Orchestration: Apache Airflow (plus)
Streaming: Delta Live Tables
MDM: Profisee (nice-to-have)
Databases: SQL Server, Cosmos DB
Soft Skills
Strong analytical and problem-solving mindset
Excellent communication and cross-team collaboration
Detail-oriented with a high sense of ownership and accountability