Data Quality Architect
Data architect job at Public Consulting Group
Public Consulting Group LLC (PCG) is a leading public sector solutions implementation and operations improvement firm that partners with health, education, and human services agencies to improve lives. Founded in 1986, PCG employs approximately 2,000 professionals throughout the U.S., all committed to delivering solutions that change lives for the better. The firm is a member of a family of companies with experience in all 50 states, and clients in three Canadian provinces and Europe. PCG offers clients a multidisciplinary approach to meet challenges, pursue opportunities, and serve constituents across the public sector. To learn more, visit ******************************
This role leads cross-functional efforts to establish and mature enterprise data quality practices as part of broader data governance initiatives. It focuses on implementing standards, rules, and monitoring to ensure data is accurate, consistent, complete, and trusted across systems. The position partners closely with security, compliance, and application owners to address integrity gaps, support stewardship, and drive continuous quality improvement. It also promotes data literacy and fosters a quality-first culture through training and strategic engagement.
Duties & Responsibilities
Leads the Data Quality Workgroup, with a strong focus on data quality frameworks. Provides input and strategic recommendations to mature data quality posture across domains and systems.
Collaborates with cross-functional teams and the COO team to define and implement data quality best practices, standards, metrics, and controls. Ensures accuracy, consistency, completeness, integrity, and trust in enterprise data assets through data quality rules and monitoring processes.
Partners with Enterprise Architecture to establish and maintain clear ownership, stewardship, and accountability structures across corporate data systems, with an emphasis on safeguarding data quality throughout the lifecycle.
Designs and operationalizes processes that ensure data completeness and validity, including robust data acquisition, capture, validation, and profiling mechanisms, across systems and integrations.
Works with functional application owners, InfoSec, and GRC to implement controls and safeguards that ensure ongoing data integrity. Leads data integrity gap analyses and facilitates remediation strategies.
Assesses existing data infrastructure to identify opportunities for improving data quality in SaaS platforms and enterprise systems. Recommends scalable solutions that deliver accurate, complete, and timely information.
Conducts proactive and reactive data quality reviews, identifying anomalies, missing values, duplication, and other issues. Develops and manages remediation plans, including root cause analysis and continuous quality improvement efforts (an illustrative rule-check sketch follows this section).
Contributes to governance and architecture forums (e.g., Data Quality, Integration, Stewardship, Architecture) to ensure cohesive management of data quality and alignment with broader data governance and enterprise architecture goals.
Promotes data literacy and stewardship, building a quality-first culture by developing training, onboarding, and awareness programs focused on data ownership, quality expectations, and issue resolution workflows.
Stays current with industry best practices and emerging technologies in data quality management, data governance, master data management (MDM), data observability, and related domains.
Other duties as assigned; these may include responsibilities in enterprise architecture, business process architecture, information architecture, and support for EA tooling. All efforts should reinforce data quality as a foundational pillar.
The above is intended to describe the general contents and requirements of work being performed by people assigned to this classification. It is not intended to be construed as an exhaustive statement of all duties, responsibilities or skills of personnel so classified.
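By way of illustration only, the sketch below shows the kind of rule-based quality checks (completeness, validity, uniqueness) this role would help define and monitor; it assumes pandas, and the dataset, column names, and thresholds are hypothetical.

# Minimal data quality rule sketch against a hypothetical "customers" extract.
# Thresholds and column names are illustrative only.
import pandas as pd

RULES = {
    "completeness_email": lambda df: df["email"].notna().mean(),                  # share of non-null emails
    "validity_email":     lambda df: df["email"].str.contains("@", na=False).mean(),
    "uniqueness_id":      lambda df: 1 - df["customer_id"].duplicated().mean(),
}
THRESHOLDS = {"completeness_email": 0.98, "validity_email": 0.95, "uniqueness_id": 1.0}

def run_quality_checks(df: pd.DataFrame) -> pd.DataFrame:
    """Score each rule and flag failures for remediation follow-up."""
    rows = []
    for name, rule in RULES.items():
        score = float(rule(df))
        rows.append({"rule": name, "score": round(score, 4), "passed": score >= THRESHOLDS[name]})
    return pd.DataFrame(rows)

if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_id": [1, 2, 2, 4],
        "email": ["a@example.com", None, "b@example.com", "not-an-email"],
    })
    print(run_quality_checks(sample))

In practice, rules like these would be agreed with data stewards and run as scheduled monitoring rather than ad hoc scripts.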
Required Skills
Proficiency with data governance processes, tools, and frameworks (e.g., DAMA, DMBOK).
Knowledge of data privacy regulations (e.g., GDPR, CCPA) and best practices.
Familiarity with data classification, access control, and data security principles.
Experience in documenting data dictionaries, metadata, and data lineage.
Experience in implementing data quality controls, data audits, and remediation plans.
Excellent analytical and problem-solving skills with keen attention to detail.
Strong communication and collaboration skills to work effectively with cross-functional teams.
Ability to handle multiple tasks and priorities in a fast-paced environment.
Qualifications
Bachelor's degree or equivalent combination of education and experience.
5+ years of experience working as an Information Architect, Data Architect, Data Governance Analyst or in a similar role with a focus on data governance, data stewardship, data literacy, master data, metadata management, and data quality.
Certifications in data governance or data management (e.g., DGSP, CDMP) are a plus.
Working Conditions
Office Setting: Remote work setting; 5-10% travel may be required for offsite meetings.
Compensation:
Compensation for roles at Public Consulting Group varies depending on a wide array of factors including, but not limited to, the specific office location, role, skill set, and level of experience. As required by applicable law, PCG provides the following reasonable range of compensation for this role: $125,500-$138,700. In addition, PCG provides a range of benefits for this role, including medical and dental care benefits, 401k, PTO, parental leave, and bereavement leave.
#LI-AH1
#LI-remote
EEO Statement:
Public Consulting Group is an Equal Opportunity Employer dedicated to celebrating diversity and intentionally creating a culture of inclusion. We believe that we work best when our employees feel empowered and accepted, and that starts by honoring each of our unique life experiences. At PCG, all aspects of employment regarding recruitment, hiring, training, promotion, compensation, benefits, transfers, layoffs, return from layoff, company-sponsored training, education, and social and recreational programs are based on merit, business needs, job requirements, and individual qualifications. We do not discriminate on the basis of race, color, religion or belief, national, social, or ethnic origin, sex, gender identity and/or expression, age, physical, mental, or sensory disability, sexual orientation, marital, civil union, or domestic partnership status, past or present military service, citizenship status, family medical history or genetic information, family or parental status, or any other status protected under federal, state, or local law. PCG will not tolerate discrimination or harassment based on any of these characteristics. PCG believes in health, equality, and prosperity for everyone so we can succeed in changing the ways the public sector, including health, education, technology and human services industries, work.
Oracle Data Analyst (Exadata)
Dallas, TX jobs
6+ month contract | Downtown Dallas, TX (Onsite). Primary responsibilities of the Senior Data Analyst include supporting and analyzing data anomalies across multiple environments, including but not limited to Data Warehouse, ODS, and Data Replication/ETL Data Management initiatives. The candidate will be in a supporting role and will work closely with the Business, DBA, ETL, and Data Management teams, providing analysis and support for complex data-related initiatives. This individual will also assist with the initial setup and ongoing documentation/configuration related to Data Governance and Master Data Management solutions. The candidate must have a passion for data, along with strong SQL, analytical, and communication skills.
Responsibilities
Investigate and analyze data anomalies and data issues reported by the Business (see the illustrative query sketch after this list)
Work with the ETL, Replication, and DBA teams to determine data transformations, data movement, and derivations, and document them accordingly
Work with support teams to ensure consistent and proactive support methodologies are adhered to for all aspects of data movements and data transformations
Assist in break fix and production validation as it relates to data derivations, replication and structures
Assist in configuration and on-going setup of Data Virtualization and Master Data Management tools
Assist in keeping documentation up to date as it relates to Data Standardization definitions, Data Dictionary and Data Lineage
Gather information from various sources and interpret patterns and trends
Ability to work in a team-oriented, fast-paced agile environment managing multiple priorities
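As a hedged illustration of the anomaly investigation described above, the sketch below runs a duplicate-key and missing-value check against a hypothetical ODS table using the python-oracledb driver and pandas; the connection details, schema, and column names are placeholders, not the client's actual environment.

# Illustrative anomaly check against a hypothetical ODS table (python-oracledb assumed installed).
import oracledb
import pandas as pd

ANOMALY_SQL = """
SELECT order_id,
       COUNT(*) AS row_count,
       SUM(CASE WHEN order_amt IS NULL THEN 1 ELSE 0 END) AS null_amounts
FROM   ods.orders
GROUP  BY order_id
HAVING COUNT(*) > 1
    OR SUM(CASE WHEN order_amt IS NULL THEN 1 ELSE 0 END) > 0
"""

def find_anomalies(user: str, password: str, dsn: str) -> pd.DataFrame:
    """Return duplicated business keys and null measures for follow-up with the ETL/DBA teams."""
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(ANOMALY_SQL)
            columns = [d[0] for d in cur.description]
            return pd.DataFrame(cur.fetchall(), columns=columns)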
Qualifications
4+ years of experience working in OLTP, Data Warehouse and Big Data databases
4+ years of experience working with Oracle Exadata
4+ years in a Data Analyst role
2+ years writing medium-to-complex stored procedures (a plus)
Ability to collaborate effectively and work as part of a team
Extensive background in writing complex queries
Extensive working knowledge of all aspects of Data Movement and Processing, including ETL, API, OLAP and best practices for data tracking
Denodo experience a plus
Master Data Management experience a plus
Big Data experience a plus (Hadoop, MongoDB)
Postgres and cloud experience a plus
Estimated Min Rate: $57.40
Estimated Max Rate: $82.00
What's In It for You?
We welcome you to be part of one of the largest and most legendary global staffing companies and to meet your career aspirations. Yoh's network of client companies has been employing professionals like you for over 65 years in the U.S., UK, and Canada. Join Yoh's extensive talent community to gain access to its vast network of opportunities, including this exclusive one. Benefit eligibility is in accordance with applicable laws and client requirements. Benefits include:
Medical, Prescription, Dental & Vision Benefits (for employees working 20+ hours per week)
Health Savings Account (HSA) (for employees working 20+ hours per week)
Life & Disability Insurance (for employees working 20+ hours per week)
MetLife Voluntary Benefits
Employee Assistance Program (EAP)
401K Retirement Savings Plan
Direct Deposit & weekly ePayroll
Referral Bonus Programs
Certification and training opportunities
Note: Any pay ranges displayed are estimations. Actual pay is determined by an applicant's experience, technical expertise, and other qualifications as listed in the job description. All qualified applicants are welcome to apply.
Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Visit ************************************************ to contact us if you are an individual with a disability and require accommodation in the application process.
For California applicants, qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. All of the material job duties described in this posting are job duties for which a criminal history may have a direct, adverse, and negative relationship potentially resulting in the withdrawal of a conditional offer of employment.
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
By applying and submitting your resume, you authorize Yoh to review and reformat your resume to meet Yoh's hiring clients' preferences. To learn more about Yoh's privacy practices, please see our Candidate Privacy Notice: **********************************
Data Architect - Azure Databricks
Palo Alto, CA jobs
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor' and a ‘Vendor to Watch' by Gartner.
Please visit Fractal | Intelligence for Imagination for more information about Fractal.
Job Posting Title: Principal Architect - Azure Databricks
Job Description
Seeking a visionary and hands-on Principal Architect to lead large-scale, complex technical initiatives leveraging Databricks within the healthcare payer domain. This role is pivotal in driving data modernization, advanced analytics, and AI/ML solutions for our clients. You will serve as a strategic advisor, technical leader, and delivery expert across multiple engagements.
Responsibilities:
Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs such as sales forecasting, trade promotions, and supply chain optimization.
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management); a minimal layering sketch follows this subsection.
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
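For illustration only, a minimal Bronze-to-Gold flow might look like the sketch below; it assumes PySpark with Delta Lake on Databricks, and the storage paths, schema, and business keys are hypothetical.

# Minimal Bronze -> Silver -> Gold sketch on Delta Lake; paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw data as-is, with ingestion metadata.
bronze = (spark.read.json("/mnt/raw/pos_sales/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/mnt/bronze/pos_sales")

# Silver: cleanse and conform (typed dates, de-duplicated on the business key).
silver = (spark.read.format("delta").load("/mnt/bronze/pos_sales")
          .withColumn("sale_date", F.to_date("sale_date"))
          .dropDuplicates(["transaction_id"]))
silver.write.format("delta").mode("overwrite").save("/mnt/silver/pos_sales")

# Gold: curated aggregate ready for BI and forecasting use cases.
gold = (silver.groupBy("store_id", "sale_date")
        .agg(F.sum("amount").alias("daily_sales")))
gold.write.format("delta").mode("overwrite").save("/mnt/gold/daily_store_sales")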
Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases
Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using PySpark, SQL, DLT (Delta Live Tables), and Databricks Workflows (an illustrative DLT sketch follows this subsection).
Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
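As a hedged example, a small Delta Live Tables pipeline with a declarative quality expectation could be sketched as below; it assumes execution inside a Databricks DLT pipeline (where spark is provided by the runtime), and the table names, paths, and rule are hypothetical.

# Hypothetical Delta Live Tables pipeline with a declarative data quality expectation.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw POS transactions ingested from cloud storage (Bronze).")
def pos_bronze():
    # spark is supplied by the DLT runtime; the path is a placeholder.
    return spark.read.format("json").load("/mnt/raw/pos/")

@dlt.table(comment="Cleansed transactions (Silver).")
@dlt.expect_or_drop("valid_amount", "amount >= 0")  # drop rows that violate the rule
def pos_silver():
    return (dlt.read("pos_bronze")
            .withColumn("sale_date", F.to_date("sale_date")))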
Performance, Scalability, and Reliability
Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks observability tooling, Ganglia, and cloud-native tools.
Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
Adoption of AI Copilots & Agentic Development
Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for:
Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks.
Generating documentation and test cases to accelerate pipeline development.
Interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for:
Data profiling and schema inference.
Automated testing and validation.
Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
Requirements:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
12-18 years of hands-on experience in data engineering, with at least 5+ years on Databricks Architecture and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on Azure Databricks using PySpark, SQL, and Databricks-native features.
Familiarity with ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage (Azure Data Lake Storage Gen2)
Experience designing Lakehouse architectures with bronze, silver, gold layering.
Expertise in optimizing Databricks performance using Delta Lake features such as OPTIMIZE, VACUUM, ZORDER, and Time Travel (a brief maintenance sketch follows this list).
Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
Experience with designing Data marts using Databricks SQL warehouse and integrating with BI tools (Power BI, Tableau, etc.).
Hands-on experience designing solutions using Workflows (Jobs), Delta Lake, Delta Live Tables (DLT), Unity Catalog, and MLflow.
Familiarity with Databricks REST APIs, Notebooks, and cluster configurations for automated provisioning and orchestration.
Experience in integrating Databricks with CI/CD pipelines using tools such as Azure DevOps, GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning Databricks workspaces and resources
In-depth experience with Azure Cloud services such as ADF, Synapse, ADLS, Key Vault, Azure Monitor, and Azure Security Center.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with Unity Catalog, RBAC, tokenization, and data classification frameworks
More than 4-5 years of experience working as a consultant with multiple clients.
Contribute to pre-sales, proposals, and client presentations as a subject matter expert.
Participated in and led RFP responses for your organization; experience providing solutions to technical problems and producing cost estimates.
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.
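As referenced in the requirements above, routine Delta Lake maintenance and Time Travel reads can be scripted as in the hedged sketch below; the table name, Z-Order column, and retention window are placeholders rather than recommended settings.

# Routine Delta maintenance sketch; table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate data by a commonly filtered column.
spark.sql("OPTIMIZE sales.gold_daily_store_sales ZORDER BY (store_id)")

# Remove files no longer referenced by the table (168 hours = default 7-day retention).
spark.sql("VACUUM sales.gold_daily_store_sales RETAIN 168 HOURS")

# Time Travel: reproduce a report against an earlier version of the table.
previous = spark.sql("SELECT * FROM sales.gold_daily_store_sales VERSION AS OF 10")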
Pay:
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $ 200,000 - $300,000. In addition, you may be eligible for a discretionary bonus for the current performance period.
Benefits:
As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Data Analyst
Cupertino, CA jobs
TITLE: Data Analyst
ANTICIPATED DURATION: 6 months
Responsibilities:
Collaborate with internal teams and external partners to determine data requirements.
Create templates for automated, seamless data collection into databases.
Design and structure databases that capture all relevant information for reporting and analysis.
Link internal and external data sources for meaningful insights (an illustrative sketch follows this list).
Create dashboards to highlight key metrics and overall business performance.
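As a brief, hypothetical illustration of linking an internal and an external source and deriving recurring metrics of the kind a dashboard would surface, the pandas sketch below joins an internal orders extract with an external FX-rate feed; the file names, columns, and join keys are assumptions.

# Hypothetical join of an internal orders extract with an external FX-rate feed.
import pandas as pd

orders = pd.read_csv("internal_orders.csv", parse_dates=["order_date"])   # internal source
fx = pd.read_csv("external_fx_rates.csv", parse_dates=["rate_date"])      # external source

joined = orders.merge(fx, left_on=["order_date", "currency"],
                      right_on=["rate_date", "currency"], how="left")
joined["amount_usd"] = joined["amount"] * joined["usd_rate"]

# Monthly metrics that a recurring financial report or dashboard could surface.
monthly = (joined.set_index("order_date")
           .resample("MS")["amount_usd"]
           .agg(["sum", "mean"])
           .rename(columns={"sum": "total_usd", "mean": "avg_order_usd"}))
print(monthly.head())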
Requirements:
Prior experience designing data collection processes and applying data structuring best practices.
Strong analytical skillset; experience building recurring financial reports and visualizations.
Required experience with Python, SQL, and Tableau.
Understanding of consumer credit processes is a plus.
Proven ability to influence and challenge outcomes to drive results.
Excellent written and verbal communication skills.
Big-picture thinker with curiosity and ownership of details.
Strong collaborator with global business partners.
The hourly pay rate range for this position is $65 to $75 (dependent on factors including but not limited to client requirements, experience, statutory considerations, and location). Benefits available to full-time employees: medical, dental, vision, disability, life insurance, 401k and commuter benefits.
Synergis is proud to be an Equal Opportunity Employer. We value diversity and do not discriminate on the basis of race, color, ethnicity, national origin, religion, age, gender, gender identity, political affiliation, sexual orientation, marital status, disability, military/veteran status, or any other status protected by applicable law.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with the requirements of applicable state and local laws, including but not limited to, the San Francisco Fair Chance Ordinance, the City of Los Angeles' Fair Chance Initiative for Hiring Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act.
For immediate consideration, please forward your resume to **********************.
If you require assistance or an accommodation in the application or employment process, please contact us at **********************.
Synergis is a workforce solutions partner serving thousands of businesses and job seekers nationwide. Our digital world has accelerated the need for businesses to build IT ecosystems that enable growth and innovation along with enhancing the Total Experience (TX). Synergis partners with our clients at the intersection of talent and transformation to scale their balanced teams of tech, digital and creative professionals. Learn more about Synergis at *******************
Financial Data Analyst
Alpharetta, GA jobs
Ready to build the future with AI?
At Genpact, we don't just keep up with technology-we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges.
If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions - we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Financial Data Analyst in Alpharetta, GA.
Role: Financial Data Analyst
Location: Alpharetta, GA 30005 (3 days per week in office)
Hiring Type: Full-time with Genpact + benefits
Responsibilities
Define and execute the product roadmap for AI tooling and data integration initiatives, driving products from concept to launch in a fast-paced, Agile environment.
Translate business needs and product strategy into detailed requirements and user stories.
Collaborate with engineering, data, and AI/ML teams to design and implement data connectors that enable seamless access to internal and external financial datasets (an illustrative connector sketch follows this list).
Partner with data engineering teams to ensure reliable data ingestion, transformation, and availability for analytics and AI models.
Evaluate and work to onboard new data sources, ensuring accuracy, consistency, and completeness of fundamental and financial data.
Continuously assess opportunities to enhance data coverage, connectivity, and usability within AI and analytics platforms.
Monitor and analyze product performance post-launch to drive ongoing optimization and inform future investments.
Facilitate alignment across stakeholders, including engineering, research, analytics, and business partners, ensuring clear communication and prioritization.
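To illustrate the data connector work described above, the sketch below pulls a financial dataset from a hypothetical REST endpoint with requests and applies a basic completeness check before downstream ingestion; the URL, parameters, and field names are assumptions, not a real vendor API.

# Hedged sketch of a small data connector for a hypothetical fundamentals endpoint.
import requests
import pandas as pd

def fetch_fundamentals(ticker: str, base_url: str = "https://example.com/api/v1") -> pd.DataFrame:
    """Fetch company fundamentals and return a tidy frame for ingestion."""
    resp = requests.get(f"{base_url}/fundamentals", params={"ticker": ticker}, timeout=30)
    resp.raise_for_status()
    records = resp.json().get("data", [])
    df = pd.DataFrame.from_records(records)
    # Basic accuracy/completeness gate before the data is made available to AI models.
    if df.empty or df["revenue"].isna().any():
        raise ValueError(f"Incomplete fundamentals payload for {ticker}")
    return df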
Minimum qualifications
Bachelor's degree in Computer Science, Finance, or related discipline. MBA/Master's Degree desired.
5+ years of experience in a similar role
Strong understanding of fundamental and financial datasets, including company financials, market data, and research data.
Proven experience in data integration, particularly using APIs, data connectors, or ETL frameworks to enable AI or analytics use cases.
Familiarity with AI/ML data pipelines, model lifecycle, and related tooling.
Experience working with cross-functional teams in an Agile environment.
Strong analytical, problem-solving, and communication skills with the ability to translate complex concepts into actionable insights.
Prior experience in financial services, investment banking, or research domains.
Excellent organizational and stakeholder management abilities with a track record of delivering data-driven products.
Preferred qualifications
Deep understanding of Python, SQL, or similar scripting languages
Knowledge of cloud data platforms (AWS, GCP, or Azure) and modern data architectures (data lakes, warehouses, streaming)
Familiarity with AI/ML platforms
Understanding of data governance, metadata management, and data security best practices in financial environments.
Experience with API standards (REST, GraphQL) and data integration frameworks.
Demonstrated ability to partner with engineering and data science teams to operationalize AI initiatives.
Why join Genpact?
• Lead AI-first transformation - Build and scale AI solutions that redefine industries
• Make an impact - Drive change for global enterprises and solve business challenges that matter
• Accelerate your career - Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills
• Grow with the best - Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace
• Committed to ethical AI - Work in an environment where governance, transparency, and security are at the core of everything we build
• Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress
Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up.
Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation.
Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
AWS Data Architect
San Jose, CA jobs
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor' and a ‘Vendor to Watch' by Gartner.
Please visit Fractal | Intelligence for Imagination for more information about Fractal.
Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will work on designing the system architecture and solution, ensuring the platform is scalable while performant, and creating automated data pipelines.
Responsibilities:
Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases
Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, SQL
Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
Performance, Scalability, and Reliability
Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks Observability, Ganglia, Cloud-native tools
Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
Adoption of AI Copilots & Agentic Development
Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for
Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks.
Generating documentation and test cases to accelerate pipeline development.
Interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for
Data profiling and schema inference.
Automated testing and validation.
Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
Requirements:
Bachelor's or master's degree in computer science, Information Technology, or a related field.
8-12 years of hands-on experience in data engineering, with at least 5+ years on Python and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, SQL.
Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc. (a minimal orchestration sketch follows this list).
Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage
Experience designing Lakehouse architectures with bronze, silver, gold layering.
Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
Experience with designing Data marts using Cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources
In-depth experience with AWS Cloud services such as Glue, S3, Redshift etc.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with RBAC, tokenization, and data classification frameworks
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality
Must be able to work in PST time zone.
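As a minimal sketch of the workload automation called out in this list, the example below defines a two-task Airflow DAG (assuming Airflow 2.4+); the DAG id, schedule, and task bodies are placeholders rather than a prescribed design.

# Minimal Airflow DAG sketch for a daily ingestion flow; all names are illustrative.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_raw_files():
    # Placeholder extract step, e.g. copying files from S3 into a Bronze layer.
    print("ingesting raw files")

def transform_to_silver():
    # Placeholder transformation step, e.g. a PySpark or Glue job trigger.
    print("transforming to silver")

with DAG(
    dag_id="daily_lakehouse_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_raw_files", python_callable=ingest_raw_files)
    transform = PythonOperator(task_id="transform_to_silver", python_callable=transform_to_silver)
    ingest >> transform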
Pay:
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
Benefits:
As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
AWS Data Architect
Santa Rosa, CA jobs
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor' and a ‘Vendor to Watch' by Gartner.
Please visit Fractal | Intelligence for Imagination for more information about Fractal.
Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will work on designing the system architecture and solution, ensuring the platform is scalable while performant, and creating automated data pipelines.
Responsibilities:
Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases
Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, SQL
Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
Performance, Scalability, and Reliability
Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks Observability, Ganglia, Cloud-native tools
Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
Adoption of AI Copilots & Agentic Development
Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for
Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks.
Generating documentation and test cases to accelerate pipeline development.
Interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for
Data profiling and schema inference.
Automated testing and validation.
Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
Requirements:
Bachelor's or master's degree in computer science, Information Technology, or a related field.
8-12 years of hands-on experience in data engineering, with at least 5+ years on Python and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, SQL.
Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc.
Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage
Experience designing Lakehouse architectures with bronze, silver, gold layering.
Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
Experience with designing Data marts using Cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources
In-depth experience with AWS Cloud services such as Glue, S3, Redshift etc.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with RBAC, tokenization, and data classification frameworks
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality
Must be able to work in PST time zone.
Pay:
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
Benefits:
As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
AWS Data Architect
San Francisco, CA jobs
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor' and a ‘Vendor to Watch' by Gartner.
Please visit Fractal | Intelligence for Imagination for more information about Fractal.
Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will work on designing the system architecture and solution, ensuring the platform is scalable while performant, and creating automated data pipelines.
Responsibilities:
Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases
Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, SQL
Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
Performance, Scalability, and Reliability
Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks Observability, Ganglia, Cloud-native tools
Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
Adoption of AI Copilots & Agentic Development
Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for
Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks.
Generating documentation and test cases to accelerate pipeline development.
Interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for
Data profiling and schema inference.
Automated testing and validation.
Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
Requirements:
Bachelor's or master's degree in computer science, Information Technology, or a related field.
8-12 years of hands-on experience in data engineering, with at least 5+ years on Python and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, SQL.
Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc.
Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage
Experience designing Lakehouse architectures with bronze, silver, gold layering.
Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
Experience with designing Data marts using Cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources
In-depth experience with AWS Cloud services such as Glue, S3, Redshift etc.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with RBAC, tokenization, and data classification frameworks
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality
Must be able to work in PST time zone.
Pay:
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
Benefits:
As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
AWS Data Architect
Fremont, CA jobs
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor' and a ‘Vendor to Watch' by Gartner.
Please visit Fractal | Intelligence for Imagination for more information about Fractal.
Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will work on designing the system architecture and solution, ensuring the platform is scalable while performant, and creating automated data pipelines.
Responsibilities:
Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases
Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, SQL
Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
Performance, Scalability, and Reliability
Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks Observability, Ganglia, Cloud-native tools
Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
Adoption of AI Copilots & Agentic Development
Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for
Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks.
Generating documentation and test cases to accelerate pipeline development.
Interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for
Data profiling and schema inference.
Automated testing and validation.
Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
Requirements:
Bachelor's or master's degree in computer science, Information Technology, or a related field.
8-12 years of hands-on experience in data engineering, with at least 5+ years on Python and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, SQL.
Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc.
Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage
Experience designing Lakehouse architectures with bronze, silver, gold layering.
Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
Experience with designing Data marts using Cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources
In-depth experience with AWS Cloud services such as Glue, S3, Redshift etc.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with RBAC, tokenization, and data classification frameworks
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality
Must be able to work in PST time zone.
Pay:
The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
Benefits:
As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Data Governance Analyst
Charlotte, NC jobs
Key Responsibilities:
Identify and document Key Data Elements used in high-profile dashboards, key business processes, and other end uses of data.
Work closely with stakeholders to understand business processes, IT architecture, and data flows (particularly downstream effects), and document systems of record (or authoritative data sources), Data Owners, Key Data Element attributes, and Data Lineage.
Together with Data Owners, participate in the design and testing of data quality rules to be applied to each Key Data Element.
Maintain the Business Glossary and report inventory (regulatory reports and non-regulatory reports).
Capture data quality issues reported by stakeholders and input detailed information in the Data Quality Incident Management system for tracking purposes.
Produce and monitor Data Quality KPIs (a short example follows this list).
Support root cause analysis when a data quality issue is identified and/or a process did not work as expected.
Document business requirements for future system and/or workflow enhancements and relate such requirements to the Data Governance framework.
Work with data consumers to understand the source, creation process and purpose of data.
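As a minimal example of how the Data Quality KPIs mentioned above might be produced, the Python sketch below computes completeness, uniqueness, and a simple format-validity rate for one Key Data Element; the column name and the validity rule are hypothetical, not taken from this posting.

    import pandas as pd

    def data_quality_kpis(df: pd.DataFrame, key_element: str) -> dict:
        """Compute simple completeness, uniqueness, and validity rates for one column."""
        completeness = df[key_element].notna().mean()           # share of non-null values
        uniqueness = 1 - df[key_element].duplicated().mean()    # share of non-duplicate values
        validity = df[key_element].astype(str).str.match(r"^\d{10}$").mean()  # example format rule
        return {
            "records": len(df),
            "completeness": round(float(completeness), 4),
            "uniqueness": round(float(uniqueness), 4),
            "validity": round(float(validity), 4),
        }

    accounts = pd.DataFrame({"account_number": ["1234567890", "1234567890", None, "12AB"]})
    print(data_quality_kpis(accounts, "account_number"))

KPIs like these are typically trended over time and fed into the incident management process when they fall below agreed thresholds.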
Qualifications And Skills:
Demonstrated experience in requirement gathering, documenting functional specifications, designing testing scripts, and conducting data analysis and gap analysis in tandem with Data Owners and other stakeholders.
Ability to present facts, project plans, milestones, achievements and recommended solutions in a concise and intuitive manner.
Highly organized individual with exceptional attention to detail, a strong sense of accountability, and collaboration skills.
Work Experience:
Relevant experience within the Data Governance field for a Financial Institution with focus on: documenting data requirements and data quality rules criteria; data quality issue logging and tracking.
EEO:
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
Data Architect
Cincinnati, OH jobs
THIS IS A W2 (NOT C2C OR REFERRAL BASED) CONTRACT OPPORTUNITY
REMOTE MOSTLY WITH 1 DAY/MO ONSITE IN CINCINNATI-LOCAL CANDIDATES TAKE PREFERENCE
RATE: $75-85/HR WITH BENEFITS
We are seeking a highly skilled Data Architect to function in a consulting capacity to analyze, redesign, and optimize a Medical Payments client's environment. The ideal candidate will have deep expertise in SQL, Azure cloud services, and modern data architecture principles.
Responsibilities
Design and maintain scalable, secure, and high-performing data architectures.
Lead migration and modernization projects in heavily used production systems.
Develop and optimize data models, schemas, and integration strategies.
Implement data governance, security, and compliance standards.
Collaborate with business stakeholders to translate requirements into technical solutions.
Ensure data quality, consistency, and accessibility across systems.
Required Qualifications
Bachelor's degree in Computer Science, Information Systems, or related field.
Proven experience as a Data Architect or similar role.
Strong proficiency in SQL (query optimization, stored procedures, indexing).
Hands-on experience with Azure cloud services for data management and analytics.
Knowledge of data modeling, ETL processes, and data warehousing concepts.
Familiarity with security best practices and compliance frameworks.
Preferred Skills
Understanding of Electronic Health Records systems.
Understanding of Big Data technologies and modern data platforms outside the scope of this project.
Data Architect
Detroit, MI jobs
Millennium Software is looking for a Data Architect for one of its direct clients based in Michigan. This is an onsite role.
Title: Data Architect
Tax term: W2 only, no C2C
Description:
All of the below are must-haves:
Senior Data Architect with 12+ years of experience in Data Modeling.
Develop conceptual, logical, and physical data models.
Experience with GCP Cloud.
Data Analyst
Saint Louis, MO jobs
Our client is seeking a Data Analyst to join their team! This position is located in St. Louis, Missouri.
Assist in gathering and documenting business requirements from stakeholders
Analyze data and provide insights to support decision-making processes
Help in the development and testing of IT solutions and applications
Collaborate with team members to troubleshoot and resolve technical issues
Desired Skills/Experience:
Bachelor's degree completed in 2024 or 2025
Strong SQL skills
Benefits:
Medical, Dental, & Vision Insurance Plans
Employee-Owned Profit Sharing (ESOP)
401K offered
The approximate pay for this position starts at $85,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
GIS Data Analyst
Atlanta, GA jobs
Tremendous opportunity for a GIS Analyst to join a stable company making big strides in its industry. This role is focused on the engineering side of the business! You will be able to gain experience in this critical technical role in the creation and support of critical feature data.
You will answer ongoing questions and inquiries, generate new geospatial data, and provide flexible GIS service. Regular inventory analysis showing counts, mileage, and measurements of the system is anticipated. You will also directly perform geospatial data preparation and assemblage tasks. This technical role requires that you have a broad geographic understanding, the ability to correctly interpret geometrics and feature placement information received from the field, and a general understanding of the installation and maintenance activities undertaken by the Engineering Department sub-groups.
Responsibilities:
Perform and be responsible for the creation, modification, and quality control of essential geospatial infrastructure data for company-wide use in a variety of critical applications, including the analysis and assembly of geospatial data utilized to assist PTC compliance.
Utilize practical knowledge to ensure field-generated spatial data from various sources is properly converted into a functioning theoretical digital network compliant with the database and data model standards established for systems.
Authorized to perform quality control checks, accept, and promote geospatial data change sets that are produced by the Geospatial Data Group's CADD technicians.
Utilize a variety of GIS software tools to perform topology corrections and make geometric modifications as part of the data quality review and acceptance process.
Called upon to work with other involved groups and departments in a collaborative manner to fully utilize the technical skill sets of the Geospatial Data Group toward the Enterprise GIS goals.
Assist the Sr. Geospatial Data Analyst with assorted GIS responsibilities as required per business needs.
Leverage the company's investment in geospatial technology to generate value and identify cost savings using GIS technology.
This is a 12-month contract position working out of our office in Midtown Atlanta 4 days a week, with 1 day remote. Our new office is state-of-the-art with many amenities (gym, coffee shop, cafeteria, etc.) and paid parking. This is an excellent opportunity to work within an enterprise environment with an outstanding work-life balance.
REQUIRED:
3+ years of experience working with data in a GIS environment
Strong communication skills, including presenting and working across organizational departments
Experience with ESRI Suite (ArcGIS)
Experience with data editing and spatial analysis
Bachelor's degree in GIS or a similar field (computer science, software engineering, IT, etc.)
PREFERRED:
TFS
Esri JavaScript API
ArcGIS Online and ArcGIS Pro
Experience with relational databases
Experience with database queries (basic queries)
SQL
Must be authorized to work in the U.S./ Sponsorships are not available
Data Architect
Plano, TX jobs
KPI Partners is a five-time Gartner-recognized data, analytics, and AI consulting company. We are leaders in data engineering on Azure, AWS, Google, Snowflake, and Databricks. Founded in 2006, KPI has over 400 consultants and has successfully delivered over 1,000 projects to our clients. We are looking for skilled data engineers who want to work with the best team in data engineering.
Title: Senior Data Architect
Location: Plano, TX (Hybrid)
Job Type: Contract - 6 Months
Key Skills: SQL, PySpark, Databricks, and Azure Cloud
Key Note: We are looking for a Data Architect who is hands-on with SQL, PySpark, Databricks, and Azure Cloud.
About the Role:
We are seeking a highly skilled and experienced Senior Data Architect to join our dynamic team at KPI, working on challenging and multi-year data transformation projects. This is an excellent opportunity for a talented data engineer to play a key role in building innovative data solutions using Azure Native Services and related technologies. If you are passionate about working with large-scale data systems and enjoy solving complex engineering problems, this role is for you.
Key Responsibilities:
Data Engineering: Design, develop, and implement data pipelines and solutions using PySpark, SQL, and related technologies (an illustrative sketch follows this list).
Collaboration: Work closely with cross-functional teams to understand business requirements and translate them into robust data solutions.
Data Warehousing: Design and implement data warehousing solutions, ensuring scalability, performance, and reliability.
Continuous Learning: Stay up to date with modern technologies and trends in data engineering and apply them to improve our data platform.
Mentorship: Provide guidance and mentorship to junior data engineers, ensuring best practices in coding, design, and development.
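As an illustrative sketch of the PySpark/SQL pipeline work described in the responsibilities above, the snippet below reads a raw extract, standardizes it, and aggregates it with Spark SQL; the storage paths, column names, and target table are assumptions for the example, not project specifics.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

    # Extract: read a raw source file (hypothetical ADLS path).
    raw = spark.read.option("header", "true").csv("abfss://raw@examplelake.dfs.core.windows.net/sales/")

    # Transform: standardize types and filter out invalid rows.
    clean = (
        raw.withColumn("sale_date", F.to_date("sale_date"))
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount") > 0)
    )

    # Aggregate with Spark SQL via a temporary view.
    clean.createOrReplaceTempView("sales_clean")
    daily = spark.sql("""
        SELECT sale_date, SUM(amount) AS total_amount
        FROM sales_clean
        GROUP BY sale_date
    """)

    # Load: write the curated result to a table for downstream reporting.
    daily.write.mode("overwrite").format("delta").saveAsTable("analytics.daily_sales")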
Must-Have Skills & Qualifications:
Minimum of 12 years of overall experience in the IT industry.
4+ years of experience in data engineering, with a strong background in building large-scale data solutions.
4+ years of hands-on experience developing and implementing data pipelines using the Azure stack (ADF, Databricks, Functions).
Proven expertise in SQL for querying, manipulating, and analyzing large datasets.
Strong knowledge of ETL processes and data warehousing fundamentals.
Self-motivated and independent, with a “let's get this done” mindset and the ability to thrive in a fast-paced and dynamic environment.
Good-to-Have Skills:
Databricks Certification is a plus.
Data Modeling, Azure Architect Certification.
Data Analyst
Irving, TX jobs
Job Title: Marketing & Merchandise Analyst - C-Shopper
**This position is a 9 month contract opportunity that cannot support C2C or any form of sponsorship**
The Marketing & Merchandise Analyst will work across various C-Shopper development initiatives, partnering with the C-Shopper team, internal data teams, and Circana/IRI personnel. This role focuses on driving adoption and impact of the C-Shopper Customer Insights platform among internal and external users, delivering actionable insights to improve decision-making and business performance.
Key Responsibilities:
Platform Development & Adoption
Assist in C-Shopper platform enhancements to maximize value for internal and external stakeholders.
Act as a subject matter expert (SME) and Customer Success resource for the C-Shopper team.
Drive internal adoption of Customer Insights tools across Marketing, Merchandising, Loyalty, Operations, and Finance teams.
User Engagement & Training
Coordinate and conduct onsite and virtual meetings with internal teams.
Deliver training sessions and provide Help Desk support for assigned user groups.
Initiate ongoing interactions with user groups to share insights and best practices.
Analytics & Insights Delivery
Produce analytics projects and presentations to support internal and external business needs.
Provide guidance and case studies demonstrating high-value insights for user groups.
Partner with user teams to act as the voice of the customer, influencing customer-centric strategies.
Customer Success & Support
Manage onboarding and ongoing support strategies for internal users.
Support external supplier projects with ad hoc analytics and presentations.
Define and track metrics for program impact, customer satisfaction, and platform usage.
Continuous Improvement
Anticipate and remove barriers to project success.
Conduct evaluations and gather feedback from user groups to improve adoption.
Monitor market and customer trends to enhance user experience and operational excellence.
Qualifications:
Strong analytical and problem-solving skills.
Excellent communication and presentation abilities.
Ability to manage multiple projects and collaborate across teams.
Familiarity with customer insights platforms and retail analytics preferred.
Assistant Data Analyst
El Segundo, CA jobs
About the Company
Step into a high-impact Assistant Data Analyst role supporting a fast-growing consumer products business. You will work with data from more than 30 international markets and multiple business models, transforming complex datasets into strategic insights that leaders rely on. If you enjoy solving challenging problems, building powerful reports, and influencing decisions with data, this role gives you a chance to make your work visible at a global level.
About the Role
You will sit at the intersection of data, operations, and strategy, partnering with regional teams and senior stakeholders to ensure data is accurate, timely, and meaningful. This opportunity is ideal for someone who enjoys both the technical and business sides of analytics and wants to expand their experience working within an international environment.
Responsibilities
Support and help oversee the collection and processing of data from 30+ countries, ensuring accuracy, consistency, and on-time delivery across all international channels.
Conduct comprehensive analysis on global sales and inventory data to develop recommendations for inventory optimization and improved business performance.
Collaborate with data analytics teams and regional stakeholders to ensure required data is captured within agreed timelines and defined standards.
Facilitate seamless data flow between systems and business users, enhancing accessibility, usability, and reliability of datasets.
Act as a liaison between the central data organization and various regional and business partners, clearly communicating data requirements, expectations, and deliverables.
Leverage advanced Excel functionality, including Power Pivot and data models, to build reports, dashboards, and analytical tools.
Independently manage projects of moderate complexity and provide business-focused support on larger, cross-functional data initiatives.
Prepare and deliver regular reports, insights, and strategic recommendations to senior leadership and other key stakeholders.
Ensure data quality controls are followed and contribute to continuous improvement of international data management processes.
Qualifications
Bachelor's degree in Business Analytics, International Business, Data Analytics, or a closely related field.
MBA or a Master's degree in Analytics or a related discipline preferred.
4-6 years of experience in data analysis and data management within a global or multi-region business environment.
Prior experience working with international data sources and stakeholders across varied business models.
Demonstrated track record of using data to support strategic business decision-making.
Required Skills
Advanced experience working with data models and complex data structures, particularly in large, multi-country environments.
Programming experience with Python for data processing, automation, and analysis tasks (a short example follows this list).
Comfortable working with large, complex datasets drawn from multiple business models and international sources.
Strong analytical capabilities with advanced proficiency in Power BI, Power Pivot, and Microsoft Excel.
Solid understanding of data management concepts and how they support broader business objectives.
Proven ability to interpret data and convert findings into clear, actionable business recommendations.
Effective project management skills, including planning, prioritizing, and executing moderately complex data projects.
Knowledge of international business structures (joint ventures, subsidiaries, distributors) and their differing data requirements.
Excellent written and verbal communication skills, including the ability to present complex analytical insights to non-technical stakeholders.
Ability to thrive in a dynamic, global environment and manage competing priorities.
Willingness to accommodate meetings and calls across multiple time zones as needed.
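As a small illustration of the Python data-processing skills listed above, the sketch below consolidates hypothetical per-country sales extracts and normalizes amounts to USD; the file layout, column names, and exchange rates are assumptions for the example.

    import glob
    import pandas as pd

    # Hypothetical exchange rates used to normalize local-currency amounts to USD.
    FX_TO_USD = {"USD": 1.0, "EUR": 1.08, "JPY": 0.0067}

    def load_country_sales(pattern: str = "data/sales_*.csv") -> pd.DataFrame:
        """Combine per-country extracts into one frame with a USD-normalized amount."""
        frames = []
        for path in glob.glob(pattern):
            df = pd.read_csv(path, parse_dates=["order_date"])
            df["source_file"] = path  # keep lineage back to the originating extract
            frames.append(df)
        combined = pd.concat(frames, ignore_index=True)
        combined["amount_usd"] = combined["amount"] * combined["currency"].map(FX_TO_USD)
        return combined

    sales = load_country_sales()
    monthly = (
        sales.groupby([sales["order_date"].dt.to_period("M"), "country"])["amount_usd"]
             .sum()
             .reset_index(name="total_usd")
    )
    print(monthly.head())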
Preferred Skills
Advanced Microsoft Excel, including Power Pivot, data models, pivot tables, advanced formulas, and complex spreadsheet design.
Power BI (or similar BI tools) for building dashboards, visualizations, KPIs, and self-service reporting solutions.
Python for data manipulation, automation scripts, and analytical workflows.
Experience working with large data extracts from ERP, CRM, or data warehouse systems.
Strong proficiency with standard office productivity tools (Outlook, PowerPoint, Word) to support communication and presentation of analytics.
Pay range and compensation package
Pay Rate: $35-$40 per hour
Note: Must be able to work onsite Monday through Friday, 40 hours a week.
Equal Opportunity Statement
We are committed to diversity and inclusivity in our hiring practices.
Data Analyst III
Columbia, SC jobs
Hours/Schedule: 8:30am to 5pm with a 30-minute lunch, or 8 to 5 with an hour lunch; potential OT.
Hybrid - Onsite 3x per week.
Day to day: The analyst's work has both a development focus and a recurring or operational focus. The development work is consultative, customer-facing, and requires an understanding of I/S business processes.
Work involves facilitating meetings, gathering and documenting requirements, interacting with management and multiple teams to complete work, designing and developing a solution, and presenting outcomes to customers.
This work requires a technical focus as well as a business perspective and a consultative mindset. The recurring or operational work consists of repeatable tasks and also includes one-time ad hoc requests.
Recurring reports and data entry are well defined and are as automated as possible. Recurring work is reviewed annually at a minimum to ensure it continues to meet business needs.
This work requires knowledge of reporting and data tools and communication with customers.
The operational work focuses on timeliness, accuracy, consistency, and providing the key customer insights and analysis needed to support business needs and decisions.
Responsibilities:
Creates and analyzes reports to support operations.
Ensures correctness of analysis, and reports findings in a concise manner to senior management.
Directly responsible for accuracy of data as financial and operational decisions are made based on the data provided.
Generates internal and external reports to support management in determining productivity and efficiencies of programs or operational processes. Revises existing reports and develops new reports based on changing methodologies.
Analyzes reports to ensure accuracy and quality. Tracks and verifies all reporting statistics.
Communicates and trains employees and managers on the complex database programs used to generate analytical data.
Designs, codes, and maintains complex database programs for the extraction and analysis of data to support financial and operational decisions.
Experience:
4 years of research and analysis experience.
Skills:
Strong organizational, customer service, communications, and analytical skills.
Advanced experience using complex mathematical calculations and a solid understanding of mathematical and statistical concepts.
Knowledge of relevant computer support systems.
Ability to train subordinate staff, including providing assistance/guidance to staff in the design and execution of reporting needs.
Proven experience with report writing and technical requirements analysis, data and business process modeling/mapping, and methodology development.
Strong understanding of relational database structures, theories, principles, and practices.
Required Software and Other Tools: Advanced knowledge of Microsoft Office. Knowledge of programming languages across various software platforms, using DB2, SQL, and/or relational databases. Knowledge of tools such as Visual Basic and Macros useful in automating reporting processes.
Education:
Bachelor's degree in Statistics, Computer Science, Mathematics, Business, Healthcare, or other related field.
About US Tech Solutions:
US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit ************************
US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Recruiter Details:
Name: Shailesh
Email: *********************************
Internal Id: 25-53080
Lead Data Architect
Tempe, AZ jobs
We are seeking a Lead Data Architect to drive the design and implementation of our enterprise data architecture with a focus on Azure Data Lake, Databricks, and Lakehouse architecture. This role will serve as the data design authority, ensuring alignment with enterprise standards while enabling business value through scalable, high-quality data solutions.
The ideal candidate will have a proven track record in financial services or wealth management, deep expertise in data modeling and MDM (e.g., Profisee), and experience architecting cloud-native data platforms that support analytics, AI/ML, and regulatory/compliance requirements.
Key Responsibilities
Define and own the enterprise data architecture strategy, standards, and patterns.
Lead the design and implementation of Azure-based Lakehouse architecture leveraging Azure Data Lake, Databricks, Delta Lake, and related services (a brief upsert sketch follows this list).
Serve as the data design authority, governing data models, integration patterns, metadata management, and data quality standards.
Architect and implement Master Data Management (MDM) solutions, preferably with Profisee.
Collaborate with stakeholders, engineers, and analysts to translate business requirements into scalable architecture and data models.
Ensure alignment with data governance, security, and compliance frameworks.
Provide technical leadership in data design, ETL/ELT best practices, and performance optimization.
Partner with enterprise and solution architects to integrate data architecture with application and cloud strategies.
Mentor and guide data engineers and modelers, fostering a culture of engineering and architecture excellence.
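To illustrate one common Lakehouse pattern referenced in the responsibilities above, the PySpark sketch below upserts a batch of records into a curated Delta table with a MERGE; the paths, key column, and table layout are assumptions for the example, and it presumes a Databricks or Delta-enabled Spark environment.

    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta_merge_sketch").getOrCreate()

    # Incoming batch of client records (hypothetical bronze-layer path).
    updates = spark.read.format("delta").load("/lake/bronze/clients_batch")

    # Upsert into the curated silver table, keyed on client_id.
    silver = DeltaTable.forPath(spark, "/lake/silver/clients")
    (
        silver.alias("t")
        .merge(updates.alias("s"), "t.client_id = s.client_id")
        .whenMatchedUpdateAll()      # refresh existing clients with the latest attributes
        .whenNotMatchedInsertAll()   # add clients not yet present in the silver table
        .execute()
    )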
Required Qualifications
10+ years of experience in data architecture, data engineering, or related fields, with 5+ years in a lead/architect capacity.
Strong expertise in Azure Data Lake, Databricks, Delta Lake, and Lakehouse architecture.
Hands-on experience architecting and implementing MDM solutions (Profisee strongly preferred).
Deep knowledge of data modeling (conceptual, logical, physical) and metadata management.
Experience as a data design authority across enterprise programs.
Strong understanding of financial services data domains (clients, accounts, portfolios, products, transactions) and regulatory needs.
Proficiency in SQL, Python, Spark, and modern ELT/ETL tools.
Familiarity with data governance, lineage, cataloging, and data quality tools.
Excellent communication and leadership skills to engage with senior business and technology stakeholders.
Preferred Qualifications
Experience with real-time data streaming (Kafka, Event Hub).
Exposure to BI/Analytics platforms (Power BI, Tableau) integrated with Lakehouse.
Knowledge of data security and privacy frameworks in financial services.
Cloud certification in Microsoft Azure Data Engineering/Architecture.
Benefits
Comprehensive health, vision, and dental coverage.
401(k) plans plus a variety of voluntary plans such as legal services, insurance, and more.
👉 If you're a data architecture leader who thrives on building scalable, cloud-native data platforms and want to make an impact in financial services, we'd love to connect.
System Integration Architect
Chicago, IL jobs
Client: Airlines/Aerospace/Aviation
Title: Workday Integration Architect/System Integration Architect/Integration Architect/System Architect
Duration: 12 Months
Top 3 skill sets required for this role:
1. Service Now HR vertical technical and functional skills
2. Workday and third-party integrations to Service Now
3. Communication and Stakeholder management
Nice to have skills or certifications:
1. Experience with AI/SNOW Virtual Agents
2. Employee portal design and development
3. Conflict Management
Job Description:
2 phases: ServiceNow to Workday migration, ideally complete by 2026
- Deep technical skills with ServiceNow and Workday
- Strong communication: the role sits in the HR space, so stakeholder management and the ability to explain concepts to people are essential
- Team environment, but the architect needs to drive the decisions
- Day-to-day: development skills
- Purpose of role: a new project to augment the team's architect skills, working with system implementation partners on integrations
- Cloud background and experience with both platforms; a mid-range or senior candidate who has done this before and can advise what to look out for; they will be working on a new employee portal that will continue to advance (phase 2).
ServiceNow in the HR vertical, integrations, AI and search (how to maximize them), communication skills, time management; OK without Workday experience if the candidate has done other migrations.
Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.