
Data Architect jobs at Cherry Bekaert

- 7814 jobs
  • Data Modeler II

    Airswift (4.9 company rating)

    Houston, TX

    Job Title: Data Modeler II
    Type: W2 Contract (USA)/INC or T4 (Canada)
    Work Setup: Hybrid (on-site with flexibility to work from home two days per week)
    Industry: Oil & Gas
    Benefits: Health, Dental, Vision

    Job Summary
    We are seeking a Data Modeler II with a product-driven, innovative mindset to design and implement data solutions that deliver measurable business value for Supply Chain operations. This role combines technical expertise with project management responsibilities, requiring collaboration with IT teams to develop solutions for small and medium-sized business challenges. The ideal candidate will have hands-on experience with data transformation, AI integration, and ERP systems, while also being able to communicate technical concepts in clear, business-friendly language.

    Key Responsibilities
    - Develop innovative data solutions leveraging knowledge of Supply Chain processes and oil & gas industry value drivers.
    - Design and optimize ETL pipelines for scalable, high-performance data processing.
    - Integrate solutions with enterprise data platforms and visualization tools.
    - Gather and clean data from ERP systems for analytics and reporting (see the illustrative sketch at the end of this listing).
    - Utilize AI tools and prompt engineering to enhance data-driven solutions.
    - Collaborate with IT and business stakeholders to deliver medium- and low-level solutions for local issues.
    - Oversee project timelines, resources, and stakeholder engagement.
    - Document project objectives, requirements, and progress updates.
    - Translate technical language into clear, non-technical terms for business users.
    - Support continuous improvement and innovation in data engineering and analytics.

    Basic / Required Qualifications
    - Bachelor's degree in Commerce (SCM), Data Science, Engineering, or a related field.
    - Hands-on experience with Python for data transformation, ETL tools (Power Automate, Power Apps; Databricks is a plus), and Oracle Cloud (Supply Chain and Financial modules).
    - Knowledge of ERP systems (Oracle Cloud required; SAP preferred).
    - Familiarity with AI integration and low-code development platforms.
    - Strong understanding of Supply Chain processes; oil & gas experience preferred.
    - Ability to manage projects and engage stakeholders effectively.
    - Excellent communication skills for translating technical concepts into business language.

    Required Knowledge / Skills / Abilities
    - Advanced proficiency in data science concepts, including statistical analysis and machine learning.
    - Experience with prompt engineering and AI-driven solutions.
    - Ability to clean and transform data for analytics and reporting.
    - Strong documentation, troubleshooting, and analytical skills.
    - Business-focused mindset with technical expertise.
    - Ability to think outside the box and propose innovative solutions.

    Special Job Characteristics
    - Hybrid work schedule (Wednesdays and Fridays remote).
    - Ability to work independently and oversee own projects.
    $82k-115k yearly est. 4d ago
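    A minimal sketch, in Python with pandas, of the kind of ERP data-cleaning and reporting step this posting describes. The file name, column names, and rules are hypothetical placeholders, not details taken from the posting.

```python
import pandas as pd

# Hypothetical ERP purchase-order extract; column names are illustrative only.
orders = pd.read_csv("oracle_cloud_po_extract.csv")

# Standardize column names and types.
orders.columns = [c.strip().lower().replace(" ", "_") for c in orders.columns]
orders["order_date"] = pd.to_datetime(orders["order_date"], errors="coerce")
orders["po_amount"] = pd.to_numeric(orders["po_amount"], errors="coerce")

# Drop rows that cannot be used for reporting and remove duplicate purchase orders.
clean = orders.dropna(subset=["order_date", "po_amount"])
clean = clean.drop_duplicates(subset=["po_number"], keep="last")

# Simple supply-chain summary for a dashboard or downstream model.
monthly_spend = (
    clean.groupby([clean["order_date"].dt.to_period("M"), "supplier_name"])["po_amount"]
    .sum()
    .reset_index(name="total_spend")
)
monthly_spend.to_csv("monthly_supplier_spend.csv", index=False)
```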
  • Oracle Data Analyst (Exadata)

    Yoh, A Day & Zimmermann Company (4.7 company rating)

    Dallas, TX

    6+ month contract | Downtown Dallas, TX (onsite)

    Primary responsibilities of the Senior Data Analyst include supporting and analyzing data anomalies across multiple environments, including but not limited to Data Warehouse, ODS, and Data Replication/ETL Data Management initiatives. The candidate will be in a supporting role and will work closely with the Business, DBA, ETL, and Data Management teams, providing analysis and support for complex data-related initiatives. This individual will also assist in the initial setup and ongoing documentation/configuration related to Data Governance and Master Data Management solutions. This candidate must have a passion for data, along with good SQL, analytical, and communication skills.

    Responsibilities
    - Investigate and analyze data anomalies and data issues reported by the Business (see the illustrative sketch at the end of this listing).
    - Work with the ETL, Replication, and DBA teams to determine data transformations, data movement, and derivations, and document accordingly.
    - Work with support teams to ensure consistent and proactive support methodologies are adhered to for all aspects of data movement and data transformation.
    - Assist in break-fix and production validation as it relates to data derivations, replication, and structures.
    - Assist in configuration and ongoing setup of Data Virtualization and Master Data Management tools.
    - Assist in keeping documentation up to date as it relates to Data Standardization definitions, the Data Dictionary, and Data Lineage.
    - Gather information from various sources and interpret patterns and trends.
    - Work in a team-oriented, fast-paced agile environment managing multiple priorities.

    Qualifications
    - 4+ years of experience working with OLTP, Data Warehouse, and Big Data databases
    - 4+ years of experience working with Oracle Exadata
    - 4+ years in a Data Analyst role
    - 2+ years writing medium to complex stored procedures a plus
    - Ability to collaborate effectively and work as part of a team
    - Extensive background in writing complex queries
    - Extensive working knowledge of all aspects of data movement and processing, including ETL, API, OLAP, and best practices for data tracking
    - Denodo experience a plus
    - Master Data Management a plus
    - Big Data experience a plus (Hadoop, MongoDB)
    - Postgres and cloud experience a plus

    Estimated Min Rate: $57.40
    Estimated Max Rate: $82.00

    What's In It for You? We welcome you to be a part of one of the largest and most legendary global staffing companies to meet your career aspirations. Yoh's network of client companies has been employing professionals like you for over 65 years in the U.S., UK, and Canada. Join Yoh's extensive talent community, which will provide you with access to Yoh's vast network of opportunities and gain access to this exclusive opportunity available to you.

    Benefit eligibility is in accordance with applicable laws and client requirements. Benefits include:
    - Medical, Prescription, Dental & Vision Benefits (for employees working 20+ hours per week)
    - Health Savings Account (HSA) (for employees working 20+ hours per week)
    - Life & Disability Insurance (for employees working 20+ hours per week)
    - MetLife Voluntary Benefits
    - Employee Assistance Program (EAP)
    - 401K Retirement Savings Plan
    - Direct Deposit & weekly epayroll
    - Referral Bonus Programs
    - Certification and training opportunities

    Note: Any pay ranges displayed are estimations. Actual pay is determined by an applicant's experience, technical expertise, and other qualifications as listed in the job description. All qualified applicants are welcome to apply.

    Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Visit ************************************************ to contact us if you are an individual with a disability and require accommodation in the application process. For California applicants, qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. All of the material job duties described in this posting are job duties for which a criminal history may have a direct, adverse, and negative relationship potentially resulting in the withdrawal of a conditional offer of employment. It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability. By applying and submitting your resume, you authorize Yoh to review and reformat your resume to meet Yoh's hiring clients' preferences. To learn more about Yoh's privacy practices, please see our Candidate Privacy Notice: **********************************
    $57.4 hourly 1d ago
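    A minimal sketch, in Python with the python-oracledb driver, of the kind of SQL-based anomaly check this listing describes: reconciling row counts between an ODS table and its warehouse counterpart. The connection details, schemas, and table names are hypothetical.

```python
import oracledb  # python-oracledb driver

# Hypothetical connection details; in practice these would come from a vault or config.
conn = oracledb.connect(user="analyst", password="***", dsn="exadata-host/ODSPDB")

RECON_SQL = """
    SELECT ods.business_date,
           ods.row_count AS ods_rows,
           dwh.row_count AS dwh_rows,
           ods.row_count - dwh.row_count AS delta
    FROM   (SELECT business_date, COUNT(*) AS row_count
            FROM   ods.customer_txn GROUP BY business_date) ods
    JOIN   (SELECT business_date, COUNT(*) AS row_count
            FROM   dwh.fact_customer_txn GROUP BY business_date) dwh
           ON ods.business_date = dwh.business_date
    WHERE  ods.row_count <> dwh.row_count
    ORDER  BY ods.business_date
"""

with conn.cursor() as cur:
    cur.execute(RECON_SQL)
    for business_date, ods_rows, dwh_rows, delta in cur:
        # Each row is a date where the ODS and the warehouse disagree; these
        # deltas would be handed to the ETL/replication teams for investigation.
        print(f"{business_date}: ODS={ods_rows} DWH={dwh_rows} delta={delta}")
conn.close()
```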
  • Data Modeler

    Airswift (4.9 company rating)

    Midland, TX

    Job Title: Data Modeler - Net Zero Program Analyst
    Type: W2 Contract (12-month duration)
    Work Setup: On-site
    Industry: Oil & Gas
    Benefits: Dental, Healthcare, Vision & 401(k)

    Airswift is seeking a Data Modeler - Net Zero Program Analyst to join one of our major clients on a 12-month contract. This newly created role supports the company's decarbonization and Net Zero initiatives by managing and analyzing operational data to identify trends and optimize performance. The position involves working closely with operations and analytics teams to deliver actionable insights through data visualization and reporting.

    Responsibilities:
    - Build and maintain Power BI dashboards to monitor emissions, operational metrics, and facility performance.
    - Extract and organize data from systems such as SiteView, ProCount, and SAP for analysis and reporting.
    - Conduct data validation and trend analysis to support sustainability and operational goals (see the illustrative sketch at the end of this listing).
    - Collaborate with field operations and project teams to interpret data and provide recommendations.
    - Ensure data consistency across platforms and assist with integration efforts (coordination only, no coding required).
    - Present findings through clear reports and visualizations for technical and non-technical stakeholders.

    Required Skills and Experience:
    - 7+ years of experience in data analysis within the Oil & Gas or Energy sectors.
    - Strong proficiency in Power BI (required).
    - Familiarity with SiteView, ProCount, and/or SAP (preferred).
    - Ability to translate operational data into insights that support emissions reduction and facility optimization.
    - Experience with surface facilities, emissions estimation, or power systems.
    - Knowledge of other visualization tools (Tableau, Spotfire) is a plus.
    - High school diploma or GED required.

    Additional Details:
    - Preference for Midland-based candidates; Houston-based candidates will need to travel to Midland periodically (travel reimbursed). No per diem offered.
    - Office-based role with low exposure risk.
    $83k-116k yearly est. 1d ago
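    The posting notes that the role itself requires no coding, but a short pandas sketch illustrates the kind of validation and trend analysis it describes: checking monthly emissions readings for gaps and flagging facilities trending above their own average. The file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical export of monthly facility emissions (e.g., pulled from SiteView/ProCount).
emissions = pd.read_csv("facility_emissions_monthly.csv", parse_dates=["month"])

# Validation: flag missing or negative readings before they reach a Power BI dashboard.
issues = emissions[emissions["co2e_tonnes"].isna() | (emissions["co2e_tonnes"] < 0)]
print(f"{len(issues)} readings need review")

# Trend analysis: 3-month rolling average of emissions per facility.
emissions = emissions.sort_values(["facility_id", "month"])
emissions["rolling_3m"] = (
    emissions.groupby("facility_id")["co2e_tonnes"]
    .transform(lambda s: s.rolling(3, min_periods=1).mean())
)

# Flag facilities whose most recent rolling average exceeds their own overall mean,
# i.e., candidates for a closer look by the operations team.
summary = emissions.groupby("facility_id").agg(
    last_rolling=("rolling_3m", "last"),
    overall_mean=("co2e_tonnes", "mean"),
)
flagged = summary[summary["last_rolling"] > summary["overall_mean"]]
print(flagged)
```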
  • Financial Data Analyst

    Genpact (4.4 company rating)

    Alpharetta, GA

    Ready to build the future with AI? At Genpact, we don't just keep up with technology; we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.

    Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

    Inviting applications for the role of Financial Data Analyst at Alpharetta, GA.
    Role: Financial Data Analyst
    Location: Alpharetta, GA 30005 / 3 days from office
    Hiring Type: Full-time with Genpact + benefits

    Responsibilities
    - Define and execute the product roadmap for AI tooling and data integration initiatives, driving products from concept to launch in a fast-paced, Agile environment.
    - Translate business needs and product strategy into detailed requirements and user stories.
    - Collaborate with engineering, data, and AI/ML teams to design and implement data connectors that enable seamless access to internal and external financial datasets (see the illustrative sketch at the end of this listing).
    - Partner with data engineering teams to ensure reliable data ingestion, transformation, and availability for analytics and AI models.
    - Evaluate and onboard new data sources, ensuring accuracy, consistency, and completeness of fundamental and financial data.
    - Continuously assess opportunities to enhance data coverage, connectivity, and usability within AI and analytics platforms.
    - Monitor and analyze product performance post-launch to drive ongoing optimization and inform future investments.
    - Facilitate alignment across stakeholders, including engineering, research, analytics, and business partners, ensuring clear communication and prioritization.

    Minimum qualifications
    - Bachelor's degree in Computer Science, Finance, or a related discipline; MBA/Master's degree desired.
    - 5+ years of experience in a similar role.
    - Strong understanding of fundamental and financial datasets, including company financials, market data, and research data.
    - Proven experience in data integration, particularly using APIs, data connectors, or ETL frameworks to enable AI or analytics use cases.
    - Familiarity with AI/ML data pipelines, model lifecycle, and related tooling.
    - Experience working with cross-functional teams in an Agile environment.
    - Strong analytical, problem-solving, and communication skills with the ability to translate complex concepts into actionable insights.
    - Prior experience in financial services, investment banking, or research domains.
    - Excellent organizational and stakeholder management abilities with a track record of delivering data-driven products.

    Preferred qualifications
    - Deep understanding of Python, SQL, or similar scripting languages
    - Knowledge of cloud data platforms (AWS, GCP, or Azure) and modern data architectures (data lakes, warehouses, streaming)
    - Familiarity with AI/ML platforms
    - Understanding of data governance, metadata management, and data security best practices in financial environments
    - Experience with API standards (REST, GraphQL) and data integration frameworks
    - Demonstrated ability to partner with engineering and data science teams to operationalize AI initiatives

    Why join Genpact?
    - Lead AI-first transformation: build and scale AI solutions that redefine industries
    - Make an impact: drive change for global enterprises and solve business challenges that matter
    - Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills
    - Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace
    - Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build
    - Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

    Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

    Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
    $69k-84k yearly est. 1d ago
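    A minimal Python sketch of the kind of data-connector work this posting references: pulling company fundamentals from a REST endpoint and normalizing them for analytics. The URL, authentication scheme, and field names are hypothetical placeholders, not a real vendor API.

```python
import requests
import pandas as pd

# Hypothetical fundamentals endpoint; a real connector would read the base URL
# and API key from configuration, handle paging, and retry on failure.
BASE_URL = "https://example-market-data.invalid/v1/fundamentals"
API_KEY = "demo-key"

def fetch_fundamentals(tickers: list[str]) -> pd.DataFrame:
    rows = []
    for ticker in tickers:
        resp = requests.get(
            f"{BASE_URL}/{ticker}",
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=10,
        )
        resp.raise_for_status()
        payload = resp.json()
        rows.append(
            {
                "ticker": ticker,
                "fiscal_period": payload.get("fiscal_period"),
                "revenue": payload.get("revenue"),
                "net_income": payload.get("net_income"),
            }
        )
    df = pd.DataFrame(rows)
    # Basic completeness check before handing off to analytics or an AI pipeline.
    missing = df[df[["revenue", "net_income"]].isna().any(axis=1)]
    if not missing.empty:
        print(f"Warning: {len(missing)} tickers returned incomplete fundamentals")
    return df

if __name__ == "__main__":
    print(fetch_fundamentals(["AAA", "BBB"]))
```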
  • AWS Data Architect

    Fractal (4.2 company rating)

    San Jose, CA

    Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a 'Cool Vendor' and a 'Vendor to Watch' by Gartner. Please visit Fractal | Intelligence for Imagination for more information about Fractal.

    Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will design the system architecture and solution, ensure the platform is scalable and performant, and create automated data pipelines.

    Responsibilities:

    Design & Architecture of Scalable Data Platforms
    - Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs.
    - Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management); see the illustrative sketch at the end of this listing.
    - Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.

    Client & Business Stakeholder Engagement
    - Partner with business stakeholders to translate functional requirements into scalable technical solutions.
    - Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases.

    Data Pipeline Development & Collaboration
    - Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, and SQL.
    - Enable data ingestion from diverse sources such as ERP (SAP), POS data, syndicated data, CRM, e-commerce platforms, and third-party datasets.

    Performance, Scalability, and Reliability
    - Optimize Spark jobs for performance, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
    - Implement monitoring and alerting using Databricks observability features, Ganglia, and cloud-native tools.

    Security, Compliance & Governance
    - Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
    - Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.

    Adoption of AI Copilots & Agentic Development
    - Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for writing PySpark, SQL, and Python code snippets for data engineering and ML tasks; generating documentation and test cases to accelerate pipeline development; and interactive debugging and iterative code optimization within notebooks.
    - Advocate for agentic AI workflows that use specialized agents for data profiling and schema inference, and for automated testing and validation.

    Innovation and Continuous Learning
    - Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
    - Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.

    Requirements:
    - Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
    - 8-12 years of hands-on experience in data engineering, with at least 5+ years on Python and Apache Spark.
    - Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL.
    - Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc.
    - Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage.
    - Experience designing Lakehouse architectures with bronze, silver, and gold layering.
    - Strong understanding of data modeling concepts, star/snowflake schemas, dimensional modeling, and modern cloud-based data warehousing.
    - Experience designing data marts using cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
    - Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
    - Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources.
    - In-depth experience with AWS Cloud services such as Glue, S3, Redshift, etc.
    - Strong understanding of data privacy, access controls, and governance best practices.
    - Experience working with RBAC, tokenization, and data classification frameworks.
    - Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
    - Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
    - Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.
    - Must be able to work in the PST time zone.

    Pay: The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.

    Benefits: As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
    $150k-180k yearly 1d ago
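    A minimal PySpark sketch of the Bronze/Silver/Gold (medallion) layering this listing describes, writing Delta tables at each layer. The paths, schema, and cleansing rules are hypothetical, and running it assumes a Spark environment with the Delta Lake libraries available (for example, a Databricks cluster).

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw POS files as-is, adding only ingestion metadata.
bronze = (
    spark.read.option("header", True).csv("s3://example-bucket/raw/pos/")
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").save("s3://example-bucket/bronze/pos")

# Silver: conform types and drop records that fail basic quality rules.
silver = (
    spark.read.format("delta").load("s3://example-bucket/bronze/pos")
    .withColumn("sale_amount", F.col("sale_amount").cast("double"))
    .withColumn("sale_date", F.to_date("sale_date"))
    .filter(F.col("sale_amount").isNotNull() & (F.col("sale_amount") >= 0))
    .dropDuplicates(["transaction_id"])
)
silver.write.format("delta").mode("overwrite").save("s3://example-bucket/silver/pos")

# Gold: curated aggregate ready for BI, e.g., daily sales by store.
gold = (
    silver.groupBy("store_id", "sale_date")
    .agg(F.sum("sale_amount").alias("daily_sales"))
)
gold.write.format("delta").mode("overwrite").save("s3://example-bucket/gold/daily_store_sales")
```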
  • AWS Data Architect

    Fractal (4.2 company rating)

    Santa Rosa, CA

    Job description, requirements, pay details, benefits, and Equal Opportunity statement are identical to the AWS Data Architect (San Jose, CA) listing above.
    $150k-180k yearly 1d ago
  • AWS Data Architect

    Fractal (4.2 company rating)

    San Francisco, CA

    Job description, requirements, pay details, benefits, and Equal Opportunity statement are identical to the AWS Data Architect (San Jose, CA) listing above.
    $150k-180k yearly 2d ago
  • AWS Data Architect

    Fractal (4.2 company rating)

    Sunnyvale, CA

    Job description, requirements, pay details, benefits, and Equal Opportunity statement are identical to the AWS Data Architect (San Jose, CA) listing above.
    $150k-180k yearly 1d ago
  • AWS Data Architect

    Fractal (4.2 company rating)

    Santa Clara, CA

    Job description, requirements, pay details, benefits, and Equal Opportunity statement are identical to the AWS Data Architect (San Jose, CA) listing above.
    $150k-180k yearly 1d ago
  • Oracle Data Modeler

    Yoh, A Day & Zimmermann Company (4.7 company rating)

    Dallas, TX

    Oracle Data Modeler (Erwin)
    6+ month contract (W2 only; no Corp-to-Corp) | Downtown Dallas, TX (onsite)

    Primary responsibilities of the Data Modeler include designing, developing, and maintaining enterprise-grade data models that support critical business initiatives, analytics, and operational systems. The ideal candidate is proficient in industry-standard data modeling tools (with hands-on expertise in Erwin Data Modeler) and has deep experience with Oracle databases. The candidate will also translate complex business requirements into robust, scalable, and normalized data models while ensuring alignment with data governance, performance, and integration standards.

    Responsibilities
    - Design and develop conceptual, logical, and physical data models using Erwin Data Modeler (required).
    - Generate, review, and optimize DDL (Data Definition Language) scripts for database objects (tables, views, indexes, constraints, partitions, etc.); see the illustrative sketch at the end of this listing.
    - Perform forward and reverse engineering of data models from existing Oracle and SQL Server databases.
    - Collaborate with data architects, DBAs, ETL developers, and business stakeholders to gather and refine requirements.
    - Ensure data models adhere to normalization standards (3NF/BCNF), data integrity, and referential integrity.
    - Support dimensional modeling (star/snowflake schemas) for data warehousing and analytics use cases.
    - Conduct model reviews, impact analysis, and version control using Erwin or comparable tools.
    - Participate in data governance initiatives, including metadata management, naming standards, and lineage documentation.
    - Optimize models for performance, scalability, and maintainability across large-scale environments.
    - Assist in database migrations, schema comparisons, and synchronization between environments (Dev/QA/Prod).
    - Assist in optimizing existing data solutions.
    - Follow Oncor's Data Governance Policy and Information Classification and Protection Policy.
    - Participate in design reviews and take guidance from the Data Architecture team members.

    Qualifications
    - 3+ years of hands-on data modeling experience in enterprise environments.
    - Expert proficiency with Erwin Data Modeler (version 9.x or higher preferred), including subject areas, model templates, and DDL generation.
    - Advanced SQL skills and deep understanding of Oracle (11g/12c/19c/21c).
    - Strong command of DDL: creating and modifying tables, indexes, constraints, sequences, synonyms, and materialized views.
    - Solid grasp of database internals: indexing strategies, partitioning, clustering, and query execution plans.
    - Experience with data modeling best practices: normalization, denormalization, surrogate keys, slowly changing dimensions (SCD), and data vault (a plus).
    - Familiarity with version control (e.g., Git) and model comparison/diff tools.
    - Excellent communication skills, with the ability to document models clearly and present to technical and non-technical audiences.
    - Self-motivated, with an ability to multi-task.
    - Capable of presenting to all levels of audiences.
    - Works well in a team environment.
    - Experience with Hadoop/MongoDB a plus.

    Estimated Min Rate: $63.00
    Estimated Max Rate: $90.00

    What's In It for You? Yoh's benefit eligibility, benefits list, pay-estimate note, Equal Opportunity statement, and privacy notice are identical to those in the Oracle Data Analyst (Exadata) listing above.
    $63 hourly 1d ago
  • AWS Data Architect

    Fractal 4.2company rating

    Fremont, CA jobs

    Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision, where no possibility is written off, only challenged to get better. We believe that a true Fractalite is one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a 'Cool Vendor' and a 'Vendor to Watch' by Gartner. Please visit Fractal | Intelligence for Imagination for more information about Fractal.
    Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will design the system architecture and solution, ensure the platform is both scalable and performant, and create automated data pipelines.
    Responsibilities:
    Design & Architecture of Scalable Data Platforms
    Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs.
    Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management); see the pipeline sketch at the end of this posting.
    Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
    Client & Business Stakeholder Engagement
    Partner with business stakeholders to translate functional requirements into scalable technical solutions.
    Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases.
    Data Pipeline Development & Collaboration
    Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, and SQL.
    Enable data ingestion from diverse sources such as ERP (SAP), POS data, syndicated data, CRM, e-commerce platforms, and third-party datasets.
    Performance, Scalability, and Reliability
    Optimize Spark jobs for performance, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
    Implement monitoring and alerting using Databricks observability features, Ganglia, and cloud-native tools.
    Security, Compliance & Governance
    Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
    Establish data governance practices including a data fitness index, quality scores, SLA monitoring, and metadata cataloging.
    Adoption of AI Copilots & Agentic Development
    Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for writing PySpark, SQL, and Python code snippets for data engineering and ML tasks; generating documentation and test cases to accelerate pipeline development; and interactive debugging and iterative code optimization within notebooks.
    Advocate for agentic AI workflows that use specialized agents for data profiling and schema inference, and for automated testing and validation.
    Innovation and Continuous Learning
    Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
    Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
    Requirements:
    Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
    8-12 years of hands-on experience in data engineering, with at least 5 years on Python and Apache Spark.
    Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL.
    Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc.
    Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage.
    Experience designing Lakehouse architectures with bronze, silver, and gold layering.
    Strong understanding of data modeling concepts, star/snowflake schemas, dimensional modeling, and modern cloud-based data warehousing.
    Experience designing data marts using cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
    Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
    Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources.
    In-depth experience with AWS Cloud services such as Glue, S3, Redshift, etc.
    Strong understanding of data privacy, access controls, and governance best practices.
    Experience working with RBAC, tokenization, and data classification frameworks.
    Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
    Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
    Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.
    Must be able to work in the PST time zone.
    Pay: The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
    Benefits: As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a "free time" PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
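    As a rough illustration of the Bronze/Silver/Gold layering and PySpark pipelines this posting describes, here is a minimal medallion-style sketch on Delta Lake. The paths, column names, and the point-of-sale example domain are hypothetical, and it assumes a Databricks or Spark environment with the Delta Lake libraries available.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists; getOrCreate() is safe either way.
spark = SparkSession.builder.getOrCreate()

# Bronze: land raw POS extracts as-is (path and schema are illustrative assumptions).
raw = spark.read.json("s3://lake/raw/pos/2024/")
raw.write.format("delta").mode("append").save("s3://lake/bronze/pos")

# Silver: deduplicate, conform types, and filter obviously bad rows.
bronze = spark.read.format("delta").load("s3://lake/bronze/pos")
silver = (
    bronze.dropDuplicates(["transaction_id"])
          .withColumn("sale_date", F.to_date("sale_ts"))
          .filter(F.col("amount") > 0)
)
silver.write.format("delta").mode("overwrite").save("s3://lake/silver/pos_sales")

# Gold: curated aggregate ready for BI consumption.
gold = (
    silver.groupBy("store_id", "sale_date")
          .agg(F.sum("amount").alias("net_sales"))
)
gold.write.format("delta").mode("overwrite").save("s3://lake/gold/daily_store_sales")
```

    A production pipeline would add schema enforcement, Unity Catalog table registration, and orchestration (e.g., Airflow or Databricks Workflows), but the layering idea is the same.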
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
    $150k-180k yearly 2d ago
  • Data Architect

    Optech 4.6company rating

    Cincinnati, OH jobs

    THIS IS A W2 (NOT C2C OR REFERRAL-BASED) CONTRACT OPPORTUNITY. MOSTLY REMOTE WITH 1 DAY/MONTH ONSITE IN CINCINNATI; LOCAL CANDIDATES TAKE PREFERENCE. RATE: $75-85/HR WITH BENEFITS.
    We are seeking a highly skilled Data Architect to work in a consulting capacity to analyze, redesign, and optimize a medical payments client's environment. The ideal candidate will have deep expertise in SQL, Azure cloud services, and modern data architecture principles.
    Responsibilities
    Design and maintain scalable, secure, and high-performing data architectures.
    Lead migration and modernization projects in heavily used production systems.
    Develop and optimize data models, schemas, and integration strategies.
    Implement data governance, security, and compliance standards.
    Collaborate with business stakeholders to translate requirements into technical solutions.
    Ensure data quality, consistency, and accessibility across systems.
    Required Qualifications
    Bachelor's degree in Computer Science, Information Systems, or a related field.
    Proven experience as a Data Architect or in a similar role.
    Strong proficiency in SQL (query optimization, stored procedures, indexing); see the sketch below.
    Hands-on experience with Azure cloud services for data management and analytics.
    Knowledge of data modeling, ETL processes, and data warehousing concepts.
    Familiarity with security best practices and compliance frameworks.
    Preferred Skills
    Understanding of Electronic Health Records systems.
    Understanding of Big Data technologies and modern data platforms outside the scope of this project.
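    The SQL tuning skills listed above (indexing plus parameterized access paths) can be illustrated with a small, hypothetical sketch against an Azure SQL database using pyodbc. The connection string, table, and column names are invented for illustration, and the use of pyodbc and the Microsoft ODBC driver is an assumption, not something named in the posting.

```python
import pyodbc

# Hypothetical Azure SQL connection; credentials and database names are placeholders.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:example.database.windows.net,1433;"
    "Database=claims;Uid=analyst;Pwd=REPLACE_ME;Encrypt=yes;"
)
cur = conn.cursor()

# Covering index so member/date lookups can seek instead of scanning the table.
cur.execute(
    """
    CREATE INDEX IX_claims_member_date
    ON dbo.claims (member_id, service_date)
    INCLUDE (paid_amount);
    """
)
conn.commit()

# Parameterized query that the new index can satisfy with a range seek.
cur.execute(
    "SELECT service_date, paid_amount FROM dbo.claims "
    "WHERE member_id = ? AND service_date >= ?",
    ("M12345", "2024-01-01"),
)
for row in cur.fetchall():
    print(row.service_date, row.paid_amount)
```

    On real workloads the index definition would be driven by the actual query plans and write patterns rather than chosen up front.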
    $75-85 hourly 5d ago
  • Data Architect

    Mindlance 4.6company rating

    Washington, DC jobs

    Job Title: Developer Premium I
    Duration: 7 months with long-term extension
    Hybrid Onsite: 4 days per week from Day 1, with a full transition to 100% onsite anticipated soon
    Job Requirements:
    Strong expertise in data architecture and data model design.
    MS Azure (core experience).
    Experience with SAP ECC preferred.
    SAFe Agile certification is a plus.
    Ability to work flexibly, including off hours, to support critical IT tasks and migration activities.
    Educational Qualifications and Experience:
    Bachelor's degree in Computer Science, Information Systems, or a related area of expertise.
    Required number of years of proven experience in the specific technology/toolset as per the Experience Matrix below for each level.
    Essential Job Functions:
    Take functional specs and produce high-quality technical specs.
    Take technical specs and produce complete, well-tested programs that meet user satisfaction and acceptance and precisely reflect the requirements: business logic, performance, and usability.
    Conduct/attend requirements definition meetings with end users and document system/business requirements.
    Conduct peer reviews of code and test cases prepared by other team members to assess quality and compliance with coding standards.
    As required for the role, perform end-user demos of the proposed solution and finished product, provide end-user training, and support user acceptance testing.
    As required for the role, troubleshoot production support issues and find appropriate solutions within the defined SLA to ensure minimal disruption to business operations.
    Ensure that Bank policies, procedures, and standards are factored into project design and development.
    As required for the role, install new releases and participate in upgrade activities.
    As required for the role, perform integration between systems that are on premises, on the cloud, and with third-party vendors.
    As required for the role, collaborate with different teams within the organization for infrastructure, integration, and database administration support.
    Adhere to project schedules and report progress regularly.
    Prepare weekly status reports, participate in status meetings, and highlight issues and constraints that would impact timely delivery of work program items.
    Find the appropriate tools to implement the project.
    Maintain knowledge of current industry standards and practices.
    As needed, interact and collaborate with Enterprise Architects (EA) and the Office of Information Security (OIS) to obtain approvals and accreditations.
    "Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans."
    $93k-124k yearly est. 4d ago
  • GIS Data Analyst

    Visionaire Partners 4.1company rating

    Atlanta, GA jobs

    Tremendous opportunity for a GIS Analyst to join a stable company making big strides in its industry. This role is focused on the engineering side of the business. You will gain experience in this critical technical role in the creation and support of critical feature data. You will answer ongoing questions and inquiries, generate new geospatial data, and provide flexible GIS services. Regular inventory analysis showing counts, mileage, and measurements of the system is anticipated (a brief sketch of this kind of analysis follows this posting). You will also directly perform geospatial data preparation and assemblage tasks. This technical role requires a broad geographic understanding, the ability to correctly interpret geometrics and feature placement information received from the field, and a general understanding of the installation and maintenance activities undertaken by the Engineering Department sub-groups.
    Responsibilities:
    Perform and be responsible for the creation, modification, and quality control of essential geospatial infrastructure data for company-wide use across a variety of critical applications, including the analysis and assembly of geospatial data used to assist PTC compliance.
    Use practical knowledge to ensure field-generated spatial data from various sources is properly converted into a functioning theoretical digital network compliant with the database and data model standards established for the systems.
    Perform quality control checks and accept and promote geospatial data change sets produced by the Geospatial Data Group's CADD technicians.
    Use a variety of GIS software tools to perform topology corrections and make geometric modifications as part of the data quality review and acceptance process.
    Work with other involved groups and departments in a collaborative manner to fully apply the technical skill sets of the Geospatial Data Group toward the Enterprise GIS goals.
    Assist the Sr. Geospatial Data Analyst with assorted GIS responsibilities as required per business needs.
    Leverage the company's investment in geospatial technology to generate value and identify cost savings using GIS technology.
    This is a 12-month contract position working out of our office in Midtown Atlanta, 4 days a week with 1 day remote. Our new office is state of the art with many amenities (gym, coffee shop, cafeteria, etc.) and paid parking. This is an excellent opportunity to work within an enterprise environment with an outstanding work-life balance.
    REQUIRED:
    3+ years of experience working with data in a GIS environment
    Strong communication skills, presenting and working across organizational departments
    Experience with the Esri suite (ArcGIS)
    Experience with data editing and spatial analysis
    Bachelor's degree in GIS or a similar field (computer science, software engineering, IT, etc.)
    PREFERRED:
    TFS
    Esri JavaScript API
    ArcGIS Online and ArcGIS Pro
    Experience with relational databases
    Experience with database queries (basic queries)
    SQL
    Must be authorized to work in the U.S.; sponsorships are not available
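    As a sketch of the inventory-style analysis mentioned above (counts and mileage by asset type), here is a minimal example using GeoPandas as a stand-in for the Esri tooling the role actually uses. The file path, column names, and projection are hypothetical assumptions.

```python
import geopandas as gpd

# Hypothetical export of linear track/asset features from the enterprise GIS.
gdf = gpd.read_file("track_segments.shp")

# Project to a metric CRS so geometric lengths are in meters (UTM zone is an assumption).
gdf = gdf.to_crs(epsg=26916)

# Convert segment lengths to miles and summarize counts and mileage per asset type.
gdf["length_mi"] = gdf.geometry.length / 1609.344
summary = gdf.groupby("asset_type")["length_mi"].agg(["count", "sum"]).rename(
    columns={"count": "segments", "sum": "total_miles"}
)
print(summary)
```

    The same counts-and-mileage report could be produced inside ArcGIS Pro or with the ArcGIS API; GeoPandas is used here only to keep the sketch self-contained.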
    $54k-79k yearly est. 3d ago
  • Data Architect

    KPI Partners 4.8company rating

    Plano, TX jobs

    KPI Partners is a five-time Gartner-recognized data, analytics, and AI consulting company. We are leaders in data engineering on Azure, AWS, Google, Snowflake, and Databricks. Founded in 2006, KPI has over 400 consultants and has successfully delivered over 1,000 projects to our clients. We are looking for skilled data engineers who want to work with the best team in data engineering.
    Title: Senior Data Architect
    Location: Plano, TX (Hybrid)
    Job Type: Contract - 6 Months
    Key Skills: SQL, PySpark, Databricks, and Azure Cloud
    Key Note: Looking for a Data Architect who is hands-on with SQL, PySpark, Databricks, and Azure Cloud.
    About the Role: We are seeking a highly skilled and experienced Senior Data Architect to join our dynamic team at KPI, working on challenging, multi-year data transformation projects. This is an excellent opportunity for a talented data engineer to play a key role in building innovative data solutions using Azure native services and related technologies. If you are passionate about working with large-scale data systems and enjoy solving complex engineering problems, this role is for you.
    Key Responsibilities:
    Data Engineering: Design, develop, and implement data pipelines and solutions using PySpark, SQL, and related technologies (see the incremental-load sketch at the end of this posting).
    Collaboration: Work closely with cross-functional teams to understand business requirements and translate them into robust data solutions.
    Data Warehousing: Design and implement data warehousing solutions, ensuring scalability, performance, and reliability.
    Continuous Learning: Stay up to date with modern technologies and trends in data engineering and apply them to improve our data platform.
    Mentorship: Provide guidance and mentorship to junior data engineers, ensuring best practices in coding, design, and development.
    Must-Have Skills & Qualifications:
    Minimum 12+ years of overall experience in the IT industry.
    4+ years of experience in data engineering, with a strong background in building large-scale data solutions.
    4+ years of hands-on experience developing and implementing data pipelines using the Azure stack (Azure, ADF, Databricks, Functions).
    Proven expertise in SQL for querying, manipulating, and analyzing large datasets.
    Strong knowledge of ETL processes and data warehousing fundamentals.
    Self-motivated and independent, with a "let's get this done" mindset and the ability to thrive in a fast-paced, dynamic environment.
    Good-to-Have Skills:
    Databricks certification is a plus.
    Data modeling, Azure Architect certification.
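    To make the hands-on PySpark/Databricks pipeline work concrete, here is a minimal, hypothetical incremental-load sketch using a Delta Lake MERGE. The paths, the orders example, and the join key are invented, and it assumes a Databricks or Spark cluster with the delta-spark package installed.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# New records landed by an upstream extract (path and format are assumptions).
updates = spark.read.format("parquet").load("/mnt/raw/orders_increment")

# Target curated Delta table; upsert by business key so reruns are idempotent.
target = DeltaTable.forPath(spark, "/mnt/curated/orders")
(
    target.alias("t")
          .merge(updates.alias("s"), "t.order_id = s.order_id")
          .whenMatchedUpdateAll()      # refresh changed rows
          .whenNotMatchedInsertAll()   # add brand-new rows
          .execute()
)
```

    In an Azure setup like the one described, ADF or Databricks Workflows would typically schedule this step and pass the increment path in as a parameter.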
    $88k-123k yearly est. 1d ago
  • Lead Data Architect

    Fractal 4.2company rating

    San Jose, CA jobs

    Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will design the system architecture and solution, ensure the platform is both scalable and performant, and create automated data pipelines.
    Responsibilities:
    Design & Architecture of Scalable Data Platforms
    Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs.
    Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
    Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
    Client & Business Stakeholder Engagement
    Partner with business stakeholders to translate functional requirements into scalable technical solutions.
    Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases.
    Data Pipeline Development & Collaboration
    Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, and SQL.
    Enable data ingestion from diverse sources such as ERP (SAP), POS data, syndicated data, CRM, e-commerce platforms, and third-party datasets.
    Performance, Scalability, and Reliability
    Optimize Spark jobs for performance, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques (see the tuning sketch at the end of this posting).
    Implement monitoring and alerting using Databricks observability features, Ganglia, and cloud-native tools.
    Security, Compliance & Governance
    Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
    Establish data governance practices including a data fitness index, quality scores, SLA monitoring, and metadata cataloging.
    Adoption of AI Copilots & Agentic Development
    Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for writing PySpark, SQL, and Python code snippets for data engineering and ML tasks; generating documentation and test cases to accelerate pipeline development; and interactive debugging and iterative code optimization within notebooks.
    Advocate for agentic AI workflows that use specialized agents for data profiling and schema inference, and for automated testing and validation.
    Innovation and Continuous Learning
    Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
    Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
    Requirements:
    Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
    8-12 years of hands-on experience in data engineering, with at least 5 years on Python and Apache Spark.
    Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL.
    Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc.
    Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage.
    Experience designing Lakehouse architectures with bronze, silver, and gold layering.
    Strong understanding of data modeling concepts, star/snowflake schemas, dimensional modeling, and modern cloud-based data warehousing.
    Experience designing data marts using cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
    Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
    Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources.
    In-depth experience with AWS Cloud services such as Glue, S3, Redshift, etc.
    Strong understanding of data privacy, access controls, and governance best practices.
    Experience working with RBAC, tokenization, and data classification frameworks.
    Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
    Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
    Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.
    Must be able to work in the PST time zone.
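    The Spark optimization responsibilities above (cluster sizing, caching, query optimization) can be sketched with a small, hypothetical example. The paths, table names, and configuration values are illustrative only and assume a Databricks/Delta environment.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Shuffle partition count is workload-dependent; this value is illustrative, not a recommendation.
spark.conf.set("spark.sql.shuffle.partitions", "200")

orders = spark.read.format("delta").load("/mnt/silver/orders")   # hypothetical paths
stores = spark.read.format("delta").load("/mnt/silver/stores")

# Broadcast the small dimension to avoid a shuffle join; cache the result reused downstream.
enriched = orders.join(F.broadcast(stores), "store_id").cache()
enriched.count()  # materialize the cache once before the fan-out of downstream reads

daily = (
    enriched.groupBy("store_id", "order_date")
            .agg(F.sum("amount").alias("sales"))
)
# Partition the output by date so consumers can prune files on their typical filter.
daily.write.format("delta").mode("overwrite").partitionBy("order_date").save("/mnt/gold/daily_sales")
```

    In practice these choices would be validated against the Spark UI and query plans rather than applied blindly.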
    $113k-151k yearly est. 1d ago
  • Lead Data Architect

    Interactive Resources-IR 4.2company rating

    Tempe, AZ jobs

    We are seeking a Lead Data Architect to drive the design and implementation of our enterprise data architecture with a focus on Azure Data Lake, Databricks, and Lakehouse architecture. This role will serve as the data design authority, ensuring alignment with enterprise standards while enabling business value through scalable, high-quality data solutions. The ideal candidate will have a proven track record in financial services or wealth management, deep expertise in data modeling and MDM (e.g., Profisee), and experience architecting cloud-native data platforms that support analytics, AI/ML, and regulatory/compliance requirements.
    Key Responsibilities
    Define and own the enterprise data architecture strategy, standards, and patterns.
    Lead the design and implementation of Azure-based Lakehouse architecture leveraging Azure Data Lake, Databricks, Delta Lake, and related services.
    Serve as the data design authority, governing data models, integration patterns, metadata management, and data quality standards.
    Architect and implement Master Data Management (MDM) solutions, preferably with Profisee (a toy survivorship sketch follows this posting).
    Collaborate with stakeholders, engineers, and analysts to translate business requirements into scalable architecture and data models.
    Ensure alignment with data governance, security, and compliance frameworks.
    Provide technical leadership in data design, ETL/ELT best practices, and performance optimization.
    Partner with enterprise and solution architects to integrate data architecture with application and cloud strategies.
    Mentor and guide data engineers and modelers, fostering a culture of engineering and architecture excellence.
    Required Qualifications
    10+ years of experience in data architecture, data engineering, or related fields, with 5+ years in a lead/architect capacity.
    Strong expertise in Azure Data Lake, Databricks, Delta Lake, and Lakehouse architecture.
    Hands-on experience architecting and implementing MDM solutions (Profisee strongly preferred).
    Deep knowledge of data modeling (conceptual, logical, physical) and metadata management.
    Experience as a data design authority across enterprise programs.
    Strong understanding of financial services data domains (clients, accounts, portfolios, products, transactions) and regulatory needs.
    Proficiency in SQL, Python, Spark, and modern ELT/ETL tools.
    Familiarity with data governance, lineage, cataloging, and data quality tools.
    Excellent communication and leadership skills to engage with senior business and technology stakeholders.
    Preferred Qualifications
    Experience with real-time data streaming (Kafka, Event Hub).
    Exposure to BI/Analytics platforms (Power BI, Tableau) integrated with Lakehouse.
    Knowledge of data security and privacy frameworks in financial services.
    Cloud certification in Microsoft Azure Data Engineering/Architecture.
    Benefits
    Comprehensive health, vision, and dental coverage.
    401(k) plans plus a variety of voluntary plans such as legal services, insurance, and more.
    👉 If you're a data architecture leader who thrives on building scalable, cloud-native data platforms and wants to make an impact in financial services, we'd love to connect.
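    As a toy illustration of the MDM survivorship logic implied above (choosing a golden record per client across source systems), here is a minimal pandas sketch. The source-system trust order, columns, and sample rows are invented, and this is a generic example rather than anything Profisee-specific.

```python
import pandas as pd

# Hypothetical client records from two source systems.
records = pd.DataFrame([
    {"client_id": "C001", "source": "CRM",  "email": "a@old.example",  "updated": "2024-01-10"},
    {"client_id": "C001", "source": "CORE", "email": "a@new.example",  "updated": "2024-03-02"},
    {"client_id": "C002", "source": "CRM",  "email": "b@example.com",  "updated": "2024-02-15"},
])

# Illustrative survivorship policy: prefer the more trusted source, then the most recent update.
trust = {"CORE": 1, "CRM": 2}  # lower number = more trusted
records["trust_rank"] = records["source"].map(trust)
records["updated"] = pd.to_datetime(records["updated"])

golden = (
    records.sort_values(["client_id", "trust_rank", "updated"], ascending=[True, True, False])
           .drop_duplicates(subset="client_id", keep="first")
           .drop(columns="trust_rank")
)
print(golden)
```

    A real MDM platform adds matching/merging rules, stewardship workflows, and lineage on top of this kind of survivorship policy.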
    $90k-118k yearly est. 1d ago
  • System Integration Architect

    Mindlance 4.6company rating

    Chicago, IL jobs

    Client: Airlines/Aerospace/Aviation
    Title: Workday Integration Architect / System Integration Architect / Integration Architect / System Architect
    Duration: 12 Months
    Top 3 skill sets required for this role:
    1. ServiceNow HR vertical technical and functional skills
    2. Workday and third-party integrations to ServiceNow
    3. Communication and stakeholder management
    Nice-to-have skills or certifications:
    1. Experience with AI/ServiceNow Virtual Agents
    2. Employee portal design and development
    3. Conflict management
    Job Description: The project has two phases, with a ServiceNow-to-Workday migration ideally complete by 2026. The role requires deep technical skills with ServiceNow and Workday and strong communication: the team operates in the HR space, so the architect must manage stakeholders and explain concepts clearly. It is a team environment, but this role is expected to drive decisions. Day-to-day work includes hands-on development. The purpose of the role is to augment architect capacity on a new project, working with system implementation partners on integrations; a cloud background and experience with both platforms are expected. Mid-level or senior candidates who have done this kind of migration before and can advise on what to look out for are preferred. The architect will also work on a new employee portal, which will continue to advance in phase 2. Key areas: ServiceNow in the HR vertical, integrations, AI and search (how to maximize them), communication skills, and time management. Candidates without Workday experience are acceptable if they have comparable migration experience.
    Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.
    $87k-120k yearly est. 1d ago
  • Senior Solutions Architect

    Firstpro, Inc. 4.5company rating

    Exton, PA jobs

    The Senior Solution Architect, Digital & Transformation (D&T) - SAP Production Planning partners closely with Production and Operations to define, deliver, and support technology-enabled business solutions. This role establishes strategic direction for Production Planning, leads solution design and delivery within SAP S/4HANA, and manages team members supporting these functions. The architect holds primary accountability for Production Planning solutions and their integrations with Quality, ensuring successful project delivery, system enhancements, and production support aligned with corporate policies, regulatory requirements, and D&T standards. This is a 12-month hybrid contract requiring 3 days onsite per week in Exton, PA.
    Responsibilities
    Collaborate with Production, Operations, and Supply Chain to capture and translate business needs into technical requirements and optimal SAP solutions.
    Lead the design, delivery, and support of SAP S/4HANA Production Planning functionality, including integration with Quality (QM), Materials Management (MM), and Extended Warehouse Management (EWM).
    Provide hands-on technical delivery of new functionality, enhancements, and production support.
    Solve complex problems using broad functional and technical expertise.
    Manage team activities, including workload distribution, performance evaluation, cross-functional coordination, and team development.
    Mentor and coach team members to ensure successful task execution and ongoing growth.
    Support and contribute to Lean Sigma initiatives and continuous improvement programs.
    Ensure compliance with company safety and quality policies, S/4HANA Service Delivery standards, Sarbanes-Oxley requirements, and FDA GMP guidelines.
    Create and maintain lifecycle documentation, including SOPs, SOIs, and Job Aids.
    Participate in and follow established Change Control processes.
    Maintain up-to-date knowledge of the latest SAP S/4HANA releases and Manufacturing/Shop Floor interfaces (MES).
    Effectively prioritize and execute tasks in a global, virtual, and fast-paced environment.
    Travel locally as needed (up to 10%).
    Requirements
    Education & Experience
    Bachelor's degree in Computer Science, Information Systems, or a related field (or equivalent experience).
    Minimum of 8 years of relevant work experience.
    Strong people management experience.
    Technical Skills
    Deep expertise in SAP S/4HANA Production Planning (PP) with strong knowledge of related modules (QM, MM, EWM).
    Experience integrating SAP S/4HANA across Production and Quality functions.
    Strong understanding of Manufacturing and Shop Floor systems (MES).
    Ability to create and maintain system lifecycle and compliance documentation.
    Proficient in English, with strong communication and collaboration skills.
    Ability to work independently in a fast-paced, complex environment with strong prioritization and decision-making abilities.
    Preferred Qualifications
    Agile & Scrum certification.
    ITIL certification.
    Additional Details
    Hybrid schedule: minimum 3 days per week onsite.
    Local travel up to 10% (approximately 26 business days per year).
    $112k-152k yearly est. 1d ago
  • Senior Solutions Architect

    Firstpro, Inc. 4.5company rating

    Exton, PA jobs

    The Senior Solution Architect, D&T - Plan to Produce partners with Quality and Operations to define, design, and deliver technology-enabled solutions across the Plan-to-Make workstream. This role is primarily responsible for SAP Quality Management (QM) processes and integrations with LIMS and MES systems, ensuring best-in-class support and alignment with company policies, regulatory requirements, and D&T standards. The architect will drive strategic direction, manage support strategy and SLAs, and enable productive, high-quality delivery across SAP and MES environments. This is a 12-month hybrid contract requiring 3 days onsite per week in Exton, PA.
    Responsibilities
    Collaborate with Quality, Operations, and Warehouse functions to understand business needs and translate them into detailed SAP and D&T technical requirements.
    Lead strategic planning, solution design, and technical delivery for SAP Quality Management and Supply Chain capabilities.
    Manage support operations including ticket handling, master data maintenance, release management, and SLA performance.
    Provide hands-on delivery of SAP QM functionality and enhancements to meet defined business requirements.
    Solve complex issues by applying deep functional and technical expertise within the ERP Services Delivery model and regulatory frameworks (SOX, FDA GMP).
    Create and maintain project artifacts including project charters, plans, budgets, and capital requests in accordance with Project and Portfolio Management processes.
    Develop and maintain system lifecycle documentation such as SOPs, SOIs, and Job Aids in compliance with corporate policies.
    Participate in and adhere to the Change Control process.
    Support Lean Sigma initiatives and continuous improvement programs.
    Ensure consistent, reliable attendance; comply with all safety regulations, procedures, and company policies.
    Collaborate with local and global teams while prioritizing and executing work effectively in a high-pressure, fast-paced environment.
    Travel globally as needed (up to 10%).
    Requirements
    Education & Experience
    Bachelor's degree in Computer Science, Information Systems, Engineering, Business Management, Supply Chain, or a related field (or equivalent experience).
    Minimum of 8 years of relevant work experience.
    Experience managing support teams and driving delivery in global environments.
    Technical Skills
    In-depth expertise in SAP Quality Management (QM) with a strong understanding of integrations to SAP PP/PI in SAP ERP.
    Experience with at least two full end-to-end SAP ERP implementations; exposure to SAP S/4HANA is a plus.
    Deep understanding of SAP QM configurations and functional areas, including: Quality Planning; Inbound, In-Process, Recurring, and Outbound Inspections; Quality Notifications; Batch Management, Traceability, and Recalls; Workflow Management; Non-Conformance Processes; Multiple Specifications; Physical Samples & Stability Analysis; Result Recording; and QM Reporting.
    Experience with MES and LIMS system integrations is strongly preferred.
    Skilled in managing SLAs, performing regression testing, and coordinating master data cleansing.
    Ability to guide and mentor team members; may direct the work of others.
    Soft Skills & Additional Abilities
    Strong communication, prioritization, and problem-solving skills.
    Ability to work independently with sound judgment and urgency.
    Comfortable in virtual, global, and high-pressure work environments.
    Commitment to adhering to SOPs, quality standards, and safety requirements.
    Preferred Qualifications
    Agile & Scrum certification
    ITIL certification
    Additional Details
    Hybrid role requiring at least 3 days onsite.
    Global travel up to 10% (approx. 26 business days per year).
    $112k-152k yearly est. 1d ago

Learn more about Cherry Bekaert jobs