
Data Architect jobs at Link Technologies

- 3553 jobs
  • Data Analyst

    Guidehouse (3.7 company rating)

    Bethesda, MD

    Job Family: Data Science Consulting
    Travel Required: None
    Clearance Required: Active Top Secret SCI with Polygraph

    What You Will Do: Work with a senior leader to apply data analytics principles to transform raw data into actionable insights, incorporating emerging trends and available initiatives, to inform financial management and budgetary strategy for a Federal C-suite client. Deliver innovative processes to integrate disparate data utilizing tools such as Tableau, Microsoft BI, and/or Qlik.

    What You Will Need:
    - An ACTIVE and MAINTAINED TS/SCI Federal or DoD Security Clearance with a COUNTERINTELLIGENCE (CI) polygraph
    - Bachelor's degree in Data Science, Computer Science, Management Information Systems, Systems Engineering, Information Technology, or a relevant degree program
    - Minimum of FIVE (5) years of experience in information technology, systems, and/or data analytics in the Federal government
    - Experience in SQL and Python (a brief illustrative sketch follows this listing)

    What Would Be Nice To Have:
    - Prior experience with cloud-based applications and data sources
    - Experience with data visualization tools such as Tableau, Microsoft BI, and/or Qlik

    The annual salary range for this position is $113,000.00-$188,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.

    What We Offer: Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. Benefits include: Medical, Rx, Dental & Vision Insurance; Personal and Family Sick Time & Company Paid Holidays; Position may be eligible for a discretionary variable incentive bonus; Parental Leave and Adoption Assistance; 401(k) Retirement Plan; Basic Life & Supplemental Life; Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts; Short-Term & Long-Term Disability; Student Loan PayDown; Tuition Reimbursement, Personal Development & Learning Opportunities; Skills Development & Certifications; Employee Referral Program; Corporate Sponsored Events & Community Outreach; Emergency Back-Up Childcare Program; Mobility Stipend.

    About Guidehouse: Guidehouse is an Equal Opportunity Employer - Protected Veterans, Individuals with Disabilities, or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinance of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse.

    Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse, and Guidehouse will not be obligated to pay a placement fee.
    $113k-188k yearly 22h ago
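
    The listing above asks for SQL and Python applied to turning raw financial data into budget insights. As a rough, illustrative sketch only (the dataset, column names, and groupings below are hypothetical and not taken from the posting), a minimal pandas aggregation of that kind might look like this:

        # Illustrative only: hypothetical columns and figures, not from the posting.
        import pandas as pd

        # Raw obligation records as they might arrive from a source system.
        raw = pd.DataFrame({
            "office":    ["A", "A", "B", "B", "B"],
            "quarter":   ["Q1", "Q2", "Q1", "Q1", "Q2"],
            "obligated": [120_000, 95_000, 40_000, 52_000, 61_000],
            "budgeted":  [100_000, 100_000, 50_000, 50_000, 50_000],
        })

        # Roll up to one row per office and quarter, then compute budget variance --
        # the kind of summary a dashboard or leadership briefing would surface.
        summary = (
            raw.groupby(["office", "quarter"], as_index=False)
               .agg(obligated=("obligated", "sum"), budgeted=("budgeted", "sum"))
        )
        summary["variance_pct"] = (
            (summary["obligated"] - summary["budgeted"]) / summary["budgeted"] * 100
        )
        print(summary.sort_values("variance_pct", ascending=False))

    In practice the same aggregation would typically be expressed in SQL against the client's data store and fed into a visualization tool such as Tableau or Qlik rather than printed.
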
  • Data Analyst PT

    Guidehouse (3.7 company rating)

    San Antonio, TX

    Job Family: Data Science Consulting
    Travel Required: Up to 25%
    Clearance Required: Ability to Obtain Public Trust

    What You Will Do: Guidehouse is seeking a highly skilled and detail-oriented Data Analyst to support a client in implementing and maintaining a portfolio of data reports, dashboards, and business intelligence tools. Duties will include:
    - Working with clients to identify reporting requirements, develop dashboards and visualizations, automate business processes, and build predictive models to inform decision-making for critical initiatives.
    - Uncovering hidden insights from data and effectively communicating findings to stakeholders in ways that are consumable and engaging.
    - Aggregating, cleaning, and transforming data to support dashboards and visualizations, as well as coordinating data needs and report parameters with customers.
    - Tracking and reporting the status of report requests to ensure timely delivery and alignment with organizational goals.

    What You Will Need:
    - Must be able to OBTAIN and MAINTAIN a Federal or DoD "PUBLIC TRUST"; candidates must obtain approved adjudication of their PUBLIC TRUST prior to onboarding with Guidehouse. Candidates with an ACTIVE PUBLIC TRUST or SUITABILITY are preferred.
    - Bachelor's degree
    - ONE (1) year of data analytics experience

    What Would Be Nice To Have:
    - Excellent verbal, written, and presentation skills, with demonstrated ability to translate technical information to a non-technical audience at all levels of the organization
    - Strong interpersonal skills, with the ability to work collaboratively and build and maintain effective working relationships with all stakeholders
    - Strong attention to detail, thoroughness, quality, and customer service orientation
    - M.S./M.A. in a relevant quantitative discipline such as data science, statistics, mathematics, computer science, or economics
    - Experience with Air Force systems and platforms (Advana, Envision, Blade, Vault, etc.)
    - Experience with data and visualization tools including Tableau, Python, SQL, and UiPath
    - Strong consulting skills, including identifying and addressing client needs, building relationships, and driving initiatives forward
    - Knowledge of data security, permissions management, and automation workflows
    - Experience working in a government or military environment
    - Proficiency in using Microsoft Graph API for advanced data integration and automation tasks
    - Experience with the Microsoft platform, including Power BI, Power Automate, and the MS Suite

    What We Offer: Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. Benefits include: Medical, Rx, Dental & Vision Insurance; Personal and Family Sick Time & Company Paid Holidays; Position may be eligible for a discretionary variable incentive bonus; Parental Leave and Adoption Assistance; 401(k) Retirement Plan; Basic Life & Supplemental Life; Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts; Short-Term & Long-Term Disability; Student Loan PayDown; Tuition Reimbursement, Personal Development & Learning Opportunities; Skills Development & Certifications; Employee Referral Program; Corporate Sponsored Events & Community Outreach; Emergency Back-Up Childcare Program; Mobility Stipend.

    About Guidehouse: Guidehouse is an Equal Opportunity Employer - Protected Veterans, Individuals with Disabilities, or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinance of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse.

    Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse, and Guidehouse will not be obligated to pay a placement fee.
    $65k-83k yearly est. 22h ago
  • Data Warehouse Architect - AI & Data Modernization

    Guidehouse (3.7 company rating)

    San Antonio, TX

    Job Family: Data Science Consulting
    Travel Required: Up to 25%
    Clearance Required: Ability to Obtain Public Trust

    What You Will Do: We are seeking an experienced Data Warehouse Architect to join our growing AI and Data practice, with a dedicated focus on the Federal Civilian Agencies (FCA) market within the Communities, Energy & Infrastructure (CEI) segment. This individual will be a hands-on technical leader, responsible for designing and delivering modern data warehouse solutions that enable federal clients to achieve mission outcomes, operational efficiency, and digital transformation. This is a strategic delivery role for someone who thrives at the intersection of data architecture, cloud platforms, and public sector modernization. The Data Warehouse Architect will collaborate with cross-functional teams and client stakeholders to modernize legacy environments, implement scalable data pipelines, and support advanced analytics initiatives. (A brief illustrative sketch of a dimensional model follows this listing.)

    Client Leadership & Delivery
    - Serve as a trusted technical advisor to FCA clients on data architecture and modernization strategy.
    - Lead engagements from assessment through implementation, ensuring delivery excellence and measurable outcomes.
    - Translate complex business and technical requirements into actionable data warehouse solutions using platforms such as Databricks, Tableau, and Azure/AWS/GCP.

    Solution Development & Innovation
    - Architect and implement scalable data pipelines and models using Databricks and other modern platforms.
    - Rationalize legacy BI environments and design future-state architectures aligned with client goals.
    - Ensure compliance with federal data governance, security, and performance standards.

    Practice & Team Leadership
    - Mentor and guide multidisciplinary teams including data engineers, analysts, and consultants.
    - Support recruiting, onboarding, and talent development within the AI & Data practice.
    - Foster a culture of innovation, collaboration, and continuous learning.

    What You Will Need:
    - US Citizenship is required.
    - Bachelor's degree is required.
    - Minimum SEVEN (7) years of experience in data architecture, data engineering, and analytics.
    - Minimum FIVE (5) years of experience delivering data-driven transformation programs.
    - Strong understanding of data platforms including Databricks, Tableau, and cloud environments (Azure, AWS, GCP).
    - Demonstrated experience supporting the business development lifecycle, including capture and proposal activities.
    - Proven track record of leading large-scale data modernization efforts from concept to execution.
    - Ability to collaborate across technical and business audiences, from C-suite to engineering teams.
    - Excellent communication, facilitation, and relationship-building skills.

    What Would Be Nice To Have:
    - Master's degree.
    - Certifications in AI/LLM or cloud data platforms.
    - Experience working with FCA clients such as DOT, GSA, USDA, or similar.
    - Familiarity with federal contracting and procurement processes.

    The annual salary range for this position is $149,000.00-$248,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.

    What We Offer: Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. Benefits include: Medical, Rx, Dental & Vision Insurance; Personal and Family Sick Time & Company Paid Holidays; Position may be eligible for a discretionary variable incentive bonus; Parental Leave and Adoption Assistance; 401(k) Retirement Plan; Basic Life & Supplemental Life; Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts; Short-Term & Long-Term Disability; Student Loan PayDown; Tuition Reimbursement, Personal Development & Learning Opportunities; Skills Development & Certifications; Employee Referral Program; Corporate Sponsored Events & Community Outreach; Emergency Back-Up Childcare Program; Mobility Stipend.

    About Guidehouse: Guidehouse is an Equal Opportunity Employer - Protected Veterans, Individuals with Disabilities, or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinance of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse.

    Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse, and Guidehouse will not be obligated to pay a placement fee.
    $149k-248k yearly 22h ago
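
    As context for the data warehouse work described above, here is a minimal, hypothetical sketch of a dimensional (star schema) model: one fact table keyed to two dimensions. The table and column names are invented for illustration, and SQLite is used only so the snippet runs anywhere; a Databricks SQL or cloud warehouse implementation would differ in dialect and partitioning choices.

        # Hypothetical star-schema sketch; names are illustrative, not from the posting.
        import sqlite3

        ddl = """
        CREATE TABLE dim_program (
            program_key  INTEGER PRIMARY KEY,
            program_name TEXT NOT NULL,
            agency       TEXT NOT NULL
        );

        CREATE TABLE dim_date (
            date_key       INTEGER PRIMARY KEY,  -- e.g. 20240131
            fiscal_year    INTEGER NOT NULL,
            fiscal_quarter TEXT NOT NULL
        );

        -- Fact table: one row per program per day, additive measures only.
        CREATE TABLE fact_obligations (
            program_key   INTEGER NOT NULL REFERENCES dim_program(program_key),
            date_key      INTEGER NOT NULL REFERENCES dim_date(date_key),
            obligated_amt REAL NOT NULL,
            invoice_count INTEGER NOT NULL
        );
        """

        conn = sqlite3.connect(":memory:")
        conn.executescript(ddl)
        tables = [r[0] for r in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
        print("star schema created:", tables)
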
  • Data Modeler II

    Airswift (4.9 company rating)

    Houston, TX

    Job Title: Data Modeler II
    Type: W2 Contract (USA) / INC or T4 (Canada)
    Work Setup: Hybrid (on-site, with flexibility to work from home two days per week)
    Industry: Oil & Gas
    Benefits: Health, Dental, Vision

    Job Summary: We are seeking a Data Modeler II with a product-driven, innovative mindset to design and implement data solutions that deliver measurable business value for Supply Chain operations. This role combines technical expertise with project management responsibilities, requiring collaboration with IT teams to develop solutions for small and medium-sized business challenges. The ideal candidate will have hands-on experience with data transformation, AI integration, and ERP systems, while also being able to communicate technical concepts in clear, business-friendly language.

    Key Responsibilities:
    - Develop innovative data solutions leveraging knowledge of Supply Chain processes and oil & gas industry value drivers.
    - Design and optimize ETL pipelines for scalable, high-performance data processing.
    - Integrate solutions with enterprise data platforms and visualization tools.
    - Gather and clean data from ERP systems for analytics and reporting.
    - Utilize AI tools and prompt engineering to enhance data-driven solutions.
    - Collaborate with IT and business stakeholders to deliver medium- and low-level solutions for local issues.
    - Oversee project timelines, resources, and stakeholder engagement.
    - Document project objectives, requirements, and progress updates.
    - Translate technical language into clear, non-technical terms for business users.
    - Support continuous improvement and innovation in data engineering and analytics.

    Basic / Required Qualifications:
    - Bachelor's degree in Commerce (SCM), Data Science, Engineering, or a related field.
    - Hands-on experience with Python for data transformation, ETL tools (Power Automate, Power Apps; Databricks is a plus), and Oracle Cloud (Supply Chain and Financial modules).
    - Knowledge of ERP systems (Oracle Cloud required; SAP preferred).
    - Familiarity with AI integration and low-code development platforms.
    - Strong understanding of Supply Chain processes; oil & gas experience preferred.
    - Ability to manage projects and engage stakeholders effectively.
    - Excellent communication skills for translating technical concepts into business language.

    Required Knowledge / Skills / Abilities:
    - Advanced proficiency in data science concepts, including statistical analysis and machine learning.
    - Experience with prompt engineering and AI-driven solutions.
    - Ability to clean and transform data for analytics and reporting.
    - Strong documentation, troubleshooting, and analytical skills.
    - Business-focused mindset with technical expertise.
    - Ability to think outside the box and propose innovative solutions.

    Special Job Characteristics:
    - Hybrid work schedule (Wednesdays and Fridays remote).
    - Ability to work independently and oversee own projects.
    $82k-115k yearly est. 2d ago
  • Oracle Data Analyst (Exadata)

    Yoh, A Day & Zimmermann Company (4.7 company rating)

    Dallas, TX

    6+ month contract; Downtown Dallas, TX (onsite).

    Primary responsibilities of the Senior Data Analyst include supporting and analyzing data anomalies for multiple environments, including but not limited to Data Warehouse, ODS, and Data Replication/ETL Data Management initiatives. The candidate will be in a supporting role and will work closely with the Business, DBA, ETL, and Data Management teams, providing analysis and support for complex data-related initiatives. This individual will also be responsible for assisting in the initial setup and ongoing documentation/configuration related to Data Governance and Master Data Management solutions. This candidate must have a passion for data, along with good SQL, analytical, and communication skills. (A short illustrative query sketch follows this listing.)

    Responsibilities:
    - Investigate and analyze data anomalies and data issues reported by the Business.
    - Work with the ETL, Replication, and DBA teams to determine data transformations, data movement, and derivations, and document accordingly.
    - Work with support teams to ensure consistent and proactive support methodologies are adhered to for all aspects of data movement and data transformation.
    - Assist in break/fix and production validation as it relates to data derivations, replication, and structures.
    - Assist in configuration and ongoing setup of Data Virtualization and Master Data Management tools.
    - Assist in keeping documentation up to date as it relates to Data Standardization definitions, the Data Dictionary, and Data Lineage.
    - Gather information from various sources and interpret patterns and trends.
    - Ability to work in a team-oriented, fast-paced agile environment managing multiple priorities.

    Qualifications:
    - 4+ years of experience working with OLTP, Data Warehouse, and Big Data databases
    - 4+ years of experience working with Oracle Exadata
    - 4+ years in a Data Analyst role
    - 2+ years writing medium to complex stored procedures a plus
    - Ability to collaborate effectively and work as part of a team
    - Extensive background in writing complex queries
    - Extensive working knowledge of all aspects of data movement and processing, including ETL, API, OLAP, and best practices for data tracking
    - Denodo experience a plus
    - Master Data Management a plus
    - Big Data experience a plus (Hadoop, MongoDB)
    - Postgres and cloud experience a plus

    Estimated Min Rate: $57.40. Estimated Max Rate: $82.00.

    What's In It for You? We welcome you to be a part of one of the largest and most legendary global staffing companies to meet your career aspirations. Yoh's network of client companies has been employing professionals like you for over 65 years in the U.S., UK, and Canada. Join Yoh's extensive talent community, which will provide you with access to Yoh's vast network of opportunities and gain access to this exclusive opportunity available to you. Benefit eligibility is in accordance with applicable laws and client requirements. Benefits include: Medical, Prescription, Dental & Vision Benefits (for employees working 20+ hours per week); Health Savings Account (HSA) (for employees working 20+ hours per week); Life & Disability Insurance (for employees working 20+ hours per week); MetLife Voluntary Benefits; Employee Assistance Program (EAP); 401K Retirement Savings Plan; Direct Deposit & weekly ePayroll; Referral Bonus Programs; Certification and training opportunities.

    Note: Any pay ranges displayed are estimations. Actual pay is determined by an applicant's experience, technical expertise, and other qualifications as listed in the job description. All qualified applicants are welcome to apply.

    Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Visit ************************************************ to contact us if you are an individual with a disability and require accommodation in the application process. For California applicants, qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. All of the material job duties described in this posting are job duties for which a criminal history may have a direct, adverse, and negative relationship, potentially resulting in the withdrawal of a conditional offer of employment. It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability. By applying and submitting your resume, you authorize Yoh to review and reformat your resume to meet Yoh's hiring clients' preferences. To learn more about Yoh's privacy practices, please see our Candidate Privacy Notice: **********************************
    $57.4 hourly 1d ago
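
    The role above centers on investigating data anomalies across a warehouse, an ODS, and replication feeds. A typical reconciliation pattern is an anti-join that finds rows present in the source but missing from the target. The sketch below is purely illustrative: table and column names are hypothetical, and SQLite stands in for Oracle/Exadata so the example is self-contained, but the SQL pattern is the same.

        # Illustrative reconciliation check; hypothetical tables, SQLite in place of Oracle.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE ods_customer (customer_id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE dw_customer  (customer_id INTEGER PRIMARY KEY, name TEXT);

            INSERT INTO ods_customer VALUES (1, 'Acme'), (2, 'Globex'), (3, 'Initech');
            INSERT INTO dw_customer  VALUES (1, 'Acme'), (3, 'Initech');
        """)

        # Rows in the operational store that never made it into the warehouse --
        # the kind of replication/ETL gap the analyst would document and chase down.
        missing = conn.execute("""
            SELECT o.customer_id, o.name
            FROM ods_customer o
            LEFT JOIN dw_customer d ON d.customer_id = o.customer_id
            WHERE d.customer_id IS NULL
        """).fetchall()

        print("in ODS but missing from warehouse:", missing)  # -> [(2, 'Globex')]
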
  • Data Architect - Azure Databricks

    Fractal (4.2 company rating)

    Palo Alto, CA

    Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision, where no possibility is written off, only challenged to get better. We believe that a true Fractalite is one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a 'Cool Vendor' and a 'Vendor to Watch' by Gartner. Please visit Fractal | Intelligence for Imagination for more information about Fractal.

    Job Posting Title: Principal Architect - Azure Databricks

    Job Description: Seeking a visionary and hands-on Principal Architect to lead large-scale, complex technical initiatives leveraging Databricks within the healthcare payer domain. This role is pivotal in driving data modernization, advanced analytics, and AI/ML solutions for our clients. You will serve as a strategic advisor, technical leader, and delivery expert across multiple engagements.

    Responsibilities:

    Design & Architecture of Scalable Data Platforms
    - Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs such as sales forecasting, trade promotions, and supply chain optimization.
    - Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management). (A brief bronze-to-silver sketch appears after this listing.)
    - Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.

    Client & Business Stakeholder Engagement
    - Partner with business stakeholders to translate functional requirements into scalable technical solutions.
    - Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases.

    Data Pipeline Development & Collaboration
    - Collaborate with data engineers and data scientists to develop end-to-end pipelines using PySpark, SQL, DLT (Delta Live Tables), and Databricks Workflows.
    - Enable data ingestion from diverse sources such as ERP (SAP), POS data, syndicated data, CRM, e-commerce platforms, and third-party datasets.

    Performance, Scalability, and Reliability
    - Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
    - Implement monitoring and alerting using Databricks observability features, Ganglia, and cloud-native tools.

    Security, Compliance & Governance
    - Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
    - Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.

    Adoption of AI Copilots & Agentic Development
    - Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for: writing PySpark, SQL, and Python code snippets for data engineering and ML tasks; generating documentation and test cases to accelerate pipeline development; and interactive debugging and iterative code optimization within notebooks.
    - Advocate for agentic AI workflows that use specialized agents for data profiling and schema inference, and for automated testing and validation.

    Innovation and Continuous Learning
    - Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
    - Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.

    Requirements:
    - Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
    - 12-18 years of hands-on experience in data engineering, with at least 5+ years on Databricks architecture and Apache Spark.
    - Expertise in building high-throughput, low-latency ETL/ELT pipelines on Azure Databricks using PySpark, SQL, and Databricks-native features.
    - Familiarity with ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage (Azure Data Lake Storage Gen2).
    - Experience designing Lakehouse architectures with bronze, silver, and gold layering.
    - Expertise in optimizing Databricks performance using Delta Lake features such as OPTIMIZE, VACUUM, ZORDER, and Time Travel.
    - Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
    - Experience designing data marts using Databricks SQL Warehouse and integrating with BI tools (Power BI, Tableau, etc.).
    - Hands-on experience designing solutions using Workflows (Jobs), Delta Lake, Delta Live Tables (DLT), Unity Catalog, and MLflow.
    - Familiarity with Databricks REST APIs, notebooks, and cluster configurations for automated provisioning and orchestration.
    - Experience integrating Databricks with CI/CD pipelines using tools such as Azure DevOps and GitHub Actions.
    - Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning Databricks workspaces and resources.
    - In-depth experience with Azure cloud services such as ADF, Synapse, ADLS, Key Vault, Azure Monitor, and Azure Security Center.
    - Strong understanding of data privacy, access controls, and governance best practices.
    - Experience working with Unity Catalog, RBAC, tokenization, and data classification frameworks.
    - Worked as a consultant for more than 4-5 years with multiple clients.
    - Contribute to pre-sales, proposals, and client presentations as a subject matter expert.
    - Participated in and led RFP responses for your organization.
    - Experience providing solutions for technical problems and providing cost estimates.
    - Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
    - Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
    - Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.

    Pay: The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $200,000-$300,000. In addition, you may be eligible for a discretionary bonus for the current performance period.

    Benefits: As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a "free time" PTO policy, allowing you the flexibility to take time needed for either sick time or vacation.

    Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
    $200k-300k yearly 4d ago
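
    The Responsibilities and Requirements above describe a Databricks medallion (Bronze/Silver/Gold) layout with Delta Lake maintenance such as OPTIMIZE and ZORDER. The following is a minimal sketch of one bronze-to-silver step, assuming a Databricks notebook environment where the spark session and Delta Lake are already available; the paths, columns, and table names are hypothetical and not taken from the posting.

        # Sketch only: assumes a Databricks runtime with Delta Lake; names are hypothetical.
        from pyspark.sql import functions as F

        bronze_path = "/mnt/lake/bronze/pos_sales"       # raw landing zone, as ingested
        silver_table = "silver.pos_sales_cleansed"       # governed, cleansed layer

        bronze = spark.read.format("delta").load(bronze_path)

        silver = (
            bronze
            .dropDuplicates(["transaction_id"])              # basic de-duplication
            .withColumn("sale_date", F.to_date("sale_ts"))   # typed business column
            .filter(F.col("quantity") > 0)                   # drop obviously bad rows
            .select("transaction_id", "store_id", "sku",
                    "quantity", "net_amount", "sale_date")
        )

        (silver.write
               .format("delta")
               .mode("overwrite")
               .saveAsTable(silver_table))

        # Periodic table maintenance mentioned in the listing (Delta OPTIMIZE / ZORDER).
        spark.sql(f"OPTIMIZE {silver_table} ZORDER BY (store_id, sale_date)")

    A gold layer would then aggregate this cleansed table into reporting marts consumed by BI tools such as Power BI or Tableau.
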
  • Data Modeler

    Airswift (4.9 company rating)

    Midland, TX

    Job Title: Data Modeler - Net Zero Program Analyst
    Type: W2 Contract (12-month duration)
    Work Setup: On-site
    Industry: Oil & Gas
    Benefits: Dental, Healthcare, Vision & 401(k)

    Airswift is seeking a Data Modeler - Net Zero Program Analyst to join one of our major clients on a 12-month contract. This newly created role supports the company's decarbonization and Net Zero initiatives by managing and analyzing operational data to identify trends and optimize performance. The position involves working closely with operations and analytics teams to deliver actionable insights through data visualization and reporting.

    Responsibilities:
    - Build and maintain Power BI dashboards to monitor emissions, operational metrics, and facility performance.
    - Extract and organize data from systems such as SiteView, ProCount, and SAP for analysis and reporting.
    - Conduct data validation and trend analysis to support sustainability and operational goals.
    - Collaborate with field operations and project teams to interpret data and provide recommendations.
    - Ensure data consistency across platforms and assist with integration efforts (coordination only, no coding required).
    - Present findings through clear reports and visualizations for technical and non-technical stakeholders.

    Required Skills and Experience:
    - 7+ years of experience in data analysis within the Oil & Gas or Energy sectors.
    - Strong proficiency in Power BI (required).
    - Familiarity with SiteView, ProCount, and/or SAP (preferred).
    - Ability to translate operational data into insights that support emissions reduction and facility optimization.
    - Experience with surface facilities, emissions estimation, or power systems.
    - Knowledge of other visualization tools (Tableau, Spotfire) is a plus.
    - High School Diploma or GED required.

    Additional Details:
    - Preference for Midland-based candidates; Houston-based candidates will need to travel to Midland periodically (travel reimbursed). No per diem offered.
    - Office-based role with low exposure risk.
    $83k-116k yearly est. 4d ago
  • AWS Data Architect

    Fractal (4.2 company rating)

    San Jose, CA

    Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision, where no possibility is written off, only challenged to get better. We believe that a true Fractalite is one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a 'Cool Vendor' and a 'Vendor to Watch' by Gartner. Please visit Fractal | Intelligence for Imagination for more information about Fractal.

    Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will work on designing the system architecture and solution, ensuring the platform is scalable and performant, and creating automated data pipelines.

    Responsibilities:

    Design & Architecture of Scalable Data Platforms
    - Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs.
    - Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
    - Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.

    Client & Business Stakeholder Engagement
    - Partner with business stakeholders to translate functional requirements into scalable technical solutions.
    - Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases.

    Data Pipeline Development & Collaboration
    - Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, and SQL.
    - Enable data ingestion from diverse sources such as ERP (SAP), POS data, syndicated data, CRM, e-commerce platforms, and third-party datasets.

    Performance, Scalability, and Reliability
    - Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
    - Implement monitoring and alerting using Databricks observability features, Ganglia, and cloud-native tools.

    Security, Compliance & Governance
    - Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
    - Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.

    Adoption of AI Copilots & Agentic Development
    - Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for writing PySpark, SQL, and Python code snippets for data engineering and ML tasks; generating documentation and test cases to accelerate pipeline development; and interactive debugging and iterative code optimization within notebooks.
    - Advocate for agentic AI workflows that use specialized agents for data profiling and schema inference, and for automated testing and validation.

    Innovation and Continuous Learning
    - Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
    - Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.

    Requirements:
    - Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
    - 8-12 years of hands-on experience in data engineering, with at least 5+ years on Python and Apache Spark.
    - Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL.
    - Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc. (a minimal orchestration sketch follows this listing).
    - Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage.
    - Experience designing Lakehouse architectures with bronze, silver, and gold layering.
    - Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
    - Experience designing data marts using cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
    - Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
    - Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources.
    - In-depth experience with AWS cloud services such as Glue, S3, Redshift, etc.
    - Strong understanding of data privacy, access controls, and governance best practices.
    - Experience working with RBAC, tokenization, and data classification frameworks.
    - Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
    - Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
    - Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.
    - Must be able to work in the PST time zone.

    Pay: The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $150k-$180k. In addition, you may be eligible for a discretionary bonus for the current performance period.

    Benefits: As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a "free time" PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.

    Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
    $150k-180k yearly 22h ago
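
    Since the requirements above call out workload automation tools such as Airflow, here is a minimal orchestration sketch: a two-task Airflow 2.x DAG whose task bodies are placeholders (in a real pipeline they might stage files to S3 and trigger a Glue job or Redshift load, but those calls are omitted here). The DAG id, schedule, and task names are hypothetical.

        # Minimal Airflow 2.x DAG sketch; task bodies are placeholders, names hypothetical.
        from datetime import datetime

        from airflow import DAG
        from airflow.operators.python import PythonOperator


        def extract_to_s3():
            # Placeholder: in practice this might pull from an API or RDBMS and stage to S3.
            print("extracting source data")


        def load_to_warehouse():
            # Placeholder: in practice this might trigger a Glue job or a Redshift COPY.
            print("loading staged data")


        with DAG(
            dag_id="daily_sales_pipeline",
            start_date=datetime(2024, 1, 1),
            schedule="@daily",        # Airflow 2.4+; older versions use schedule_interval
            catchup=False,
        ) as dag:
            extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
            load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

            extract >> load           # simple linear dependency
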
  • AWS Data Architect

    Fractal (4.2 company rating)

    Santa Rosa, CA

    $150k-180k yearly 22h ago
  • AWS Data Architect

    Fractal (4.2 company rating)

    San Francisco, CA

    $150k-180k yearly 22h ago
  • AWS Data Architect

    Fractal (4.2 company rating)

    Sunnyvale, CA

    Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor' and a ‘Vendor to Watch' by Gartner. Please visit Fractal | Intelligence for Imagination for more information about Fractal. Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will work on designing the system architecture and solution, ensuring the platform is scalable while performant, and creating automated data pipelines. Responsibilities: Design & Architecture of Scalable Data Platforms Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management). Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility. Client & Business Stakeholder Engagement Partner with business stakeholders to translate functional requirements into scalable technical solutions. Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases Data Pipeline Development & Collaboration Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, SQL Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets. Performance, Scalability, and Reliability Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques. Implement monitoring and alerting using Databricks Observability, Ganglia, Cloud-native tools Security, Compliance & Governance Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies. Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging. Adoption of AI Copilots & Agentic Development Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks. Generating documentation and test cases to accelerate pipeline development. Interactive debugging and iterative code optimization within notebooks. Advocate for agentic AI workflows that use specialized agents for Data profiling and schema inference. Automated testing and validation. Innovation and Continuous Learning Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling. Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements. 
Requirements:
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
8-12 years of hands-on experience in data engineering, with at least 5 years on Python and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL.
Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc.
Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage.
Experience designing Lakehouse architectures with bronze, silver, and gold layering.
Strong understanding of data modeling concepts, star/snowflake schemas, dimensional modeling, and modern cloud-based data warehousing.
Experience designing data marts using cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
Experience building CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources.
In-depth experience with AWS cloud services such as Glue, S3, Redshift, etc.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with RBAC, tokenization, and data classification frameworks.
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.
Must be able to work in the PST time zone.

Pay: The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.

Benefits: As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a "free time" PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
    $150k-180k yearly 22h ago
  • AWS Data Architect

    Fractal 4.2company rating

    Santa Clara, CA jobs

Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a 'Cool Vendor' and a 'Vendor to Watch' by Gartner. Please visit Fractal | Intelligence for Imagination for more information about Fractal.

Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will design the system architecture and solution, ensure the platform is scalable and performant, and create automated data pipelines.

Responsibilities:

Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs.
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.

Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases.

Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, and SQL.
Enable data ingestion from diverse sources such as ERP (SAP), POS data, syndicated data, CRM, e-commerce platforms, and third-party datasets.

Performance, Scalability, and Reliability
Optimize Spark jobs for performance, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks observability features, Ganglia, and cloud-native tools.

Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including a Data Fitness Index, quality scores, SLA monitoring, and metadata cataloging.

Adoption of AI Copilots & Agentic Development
Use GitHub Copilot, Databricks Assistant, and other AI code agents for writing PySpark, SQL, and Python code snippets for data engineering and ML tasks; generating documentation and test cases to accelerate pipeline development; and interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for data profiling and schema inference, and for automated testing and validation.

Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
Requirements:
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
8-12 years of hands-on experience in data engineering, with at least 5 years on Python and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL.
Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc.
Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage.
Experience designing Lakehouse architectures with bronze, silver, and gold layering.
Strong understanding of data modeling concepts, star/snowflake schemas, dimensional modeling, and modern cloud-based data warehousing.
Experience designing data marts using cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
Experience building CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources.
In-depth experience with AWS cloud services such as Glue, S3, Redshift, etc.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with RBAC, tokenization, and data classification frameworks.
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.
Must be able to work in the PST time zone.

Pay: The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.

Benefits: As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a "free time" PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
    $150k-180k yearly 22h ago
  • Oracle Data Modeler

    Yoh, A Day & Zimmermann Company 4.7company rating

    Dallas, TX jobs

Oracle Data Modeler (Erwin)
6+ month contract (W2 only - no C2C)
Downtown Dallas, TX (Onsite)

Primary responsibilities of the Data Modeler include designing, developing, and maintaining enterprise-grade data models that support critical business initiatives, analytics, and operational systems. The ideal candidate is proficient in industry-standard data modeling tools (with hands-on expertise in Erwin Data Modeler) and has deep experience with Oracle databases. The candidate will also translate complex business requirements into robust, scalable, and normalized data models while ensuring alignment with data governance, performance, and integration standards.

Responsibilities
Design and develop conceptual, logical, and physical data models using Erwin Data Modeler (required).
Generate, review, and optimize DDL (Data Definition Language) scripts for database objects (tables, views, indexes, constraints, partitions, etc.); a brief illustrative sketch follows this listing's description.
Perform forward and reverse engineering of data models from existing Oracle and SQL Server databases.
Collaborate with data architects, DBAs, ETL developers, and business stakeholders to gather and refine requirements.
Ensure data models adhere to normalization standards (3NF/BCNF), data integrity, and referential integrity.
Support dimensional modeling (star/snowflake schemas) for data warehousing and analytics use cases.
Conduct model reviews, impact analysis, and version control using Erwin or comparable tools.
Participate in data governance initiatives, including metadata management, naming standards, and lineage documentation.
Optimize models for performance, scalability, and maintainability across large-scale environments.
Assist in database migrations, schema comparisons, and synchronization between environments (Dev/QA/Prod).
Assist in optimizing existing data solutions.
Follow Oncor's Data Governance Policy and Information Classification and Protection Policy.
Participate in design reviews and take guidance from the Data Architecture team members.

Qualifications
3+ years of hands-on data modeling experience in enterprise environments.
Expert proficiency with Erwin Data Modeler (version 9.x or higher preferred), including subject areas, model templates, and DDL generation.
Advanced SQL skills and deep understanding of Oracle (11g/12c/19c/21c).
Strong command of DDL: creating and modifying tables, indexes, constraints, sequences, synonyms, and materialized views.
Solid grasp of database internals: indexing strategies, partitioning, clustering, and query execution plans.
Experience with data modeling best practices: normalization, denormalization, surrogate keys, slowly changing dimensions (SCD), and data vault (a plus).
Familiarity with version control (e.g., Git) and model comparison/diff tools.
Excellent communication skills, with the ability to document models clearly and present to technical and non-technical audiences.
Self-motivated, with an ability to multi-task.
Capable of presenting to all levels of audiences.
Works well in a team environment.
Experience with Hadoop/MongoDB a plus.

Estimated Min Rate: $63.00
Estimated Max Rate: $90.00

What's In It for You? We welcome you to be a part of one of the largest and most legendary global staffing companies and to meet your career aspirations. Yoh's network of client companies has been employing professionals like you for over 65 years in the U.S., UK, and Canada. Join Yoh's extensive talent community that will provide you with access to Yoh's vast network of opportunities and gain access to this exclusive opportunity available to you.
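A rough, hand-rolled illustration of the DDL generation and dimensional modeling described above: the sketch emits Oracle-style CREATE TABLE statements for a tiny, hypothetical star schema (dim_store, fact_sales) with a surrogate key. A modeling tool such as Erwin automates this forward engineering from the model itself; this is only an approximation of the idea, and none of the object names come from the posting.

```python
# Sketch of forward-engineering a tiny star schema into Oracle-style DDL.
# All table and column names (dim_store, fact_sales, ...) are hypothetical.

def create_table_ddl(table, columns, pk):
    """Render a CREATE TABLE statement from (name, datatype) pairs plus a primary key."""
    lines = [f"  {name} {dtype}" for name, dtype in columns]
    lines.append(f"  CONSTRAINT pk_{table} PRIMARY KEY ({pk})")
    return f"CREATE TABLE {table} (\n" + ",\n".join(lines) + "\n);"

# Dimension table with an identity-based surrogate key.
print(create_table_ddl(
    "dim_store",
    [("store_key", "NUMBER GENERATED ALWAYS AS IDENTITY"),
     ("store_code", "VARCHAR2(20) NOT NULL"),
     ("store_name", "VARCHAR2(100)")],
    pk="store_key"))

# Fact table carrying the dimension's surrogate key.
print(create_table_ddl(
    "fact_sales",
    [("sale_id", "NUMBER"),
     ("store_key", "NUMBER NOT NULL"),
     ("sale_date", "DATE"),
     ("net_amount", "NUMBER(12,2)")],
    pk="sale_id"))

# A complete model would also emit foreign keys, indexes, and partitioning, e.g.:
print("ALTER TABLE fact_sales ADD CONSTRAINT fk_sales_store "
      "FOREIGN KEY (store_key) REFERENCES dim_store (store_key);")
```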
Benefit eligibility is in accordance with applicable laws and client requirements. Benefits include: Medical, Prescription, Dental & Vision Benefits (for employees working 20+ hours per week) Health Savings Account (HSA) (for employees working 20+ hours per week) Life & Disability Insurance (for employees working 20+ hours per week) MetLife Voluntary Benefits Employee Assistance Program (EAP) 401K Retirement Savings Plan Direct Deposit & weekly epayroll Referral Bonus Programs Certification and training opportunities Note: Any pay ranges displayed are estimations. Actual pay is determined by an applicant's experience, technical expertise, and other qualifications as listed in the job description. All qualified applicants are welcome to apply. Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Visit ************************************************ to contact us if you are an individual with a disability and require accommodation in the application process. For California applicants, qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. All of the material job duties described in this posting are job duties for which a criminal history may have a direct, adverse, and negative relationship potentially resulting in the withdrawal of a conditional offer of employment. It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability. By applying and submitting your resume, you authorize Yoh to review and reformat your resume to meet Yoh's hiring clients' preferences. To learn more about Yoh's privacy practices, please see our Candidate Privacy Notice: **********************************
    $63 hourly 1d ago
  • AWS Data Architect

    Fractal 4.2company rating

    Fremont, CA jobs

Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a 'Cool Vendor' and a 'Vendor to Watch' by Gartner. Please visit Fractal | Intelligence for Imagination for more information about Fractal.

Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will design the system architecture and solution, ensure the platform is scalable and performant, and create automated data pipelines.

Responsibilities:

Design & Architecture of Scalable Data Platforms
Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs.
Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.

Client & Business Stakeholder Engagement
Partner with business stakeholders to translate functional requirements into scalable technical solutions.
Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases.

Data Pipeline Development & Collaboration
Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, and SQL.
Enable data ingestion from diverse sources such as ERP (SAP), POS data, syndicated data, CRM, e-commerce platforms, and third-party datasets.

Performance, Scalability, and Reliability
Optimize Spark jobs for performance, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
Implement monitoring and alerting using Databricks observability features, Ganglia, and cloud-native tools.

Security, Compliance & Governance
Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
Establish data governance practices including a Data Fitness Index, quality scores, SLA monitoring, and metadata cataloging.

Adoption of AI Copilots & Agentic Development
Use GitHub Copilot, Databricks Assistant, and other AI code agents for writing PySpark, SQL, and Python code snippets for data engineering and ML tasks; generating documentation and test cases to accelerate pipeline development; and interactive debugging and iterative code optimization within notebooks.
Advocate for agentic AI workflows that use specialized agents for data profiling and schema inference, and for automated testing and validation.

Innovation and Continuous Learning
Stay abreast of emerging trends in Lakehouse architectures, generative AI, and cloud-native tooling.
Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
Requirements:
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
8-12 years of hands-on experience in data engineering, with at least 5 years on Python and Apache Spark.
Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL.
Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc.
Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage.
Experience designing Lakehouse architectures with bronze, silver, and gold layering.
Strong understanding of data modeling concepts, star/snowflake schemas, dimensional modeling, and modern cloud-based data warehousing.
Experience designing data marts using cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
Experience building CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources.
In-depth experience with AWS cloud services such as Glue, S3, Redshift, etc.
Strong understanding of data privacy, access controls, and governance best practices.
Experience working with RBAC, tokenization, and data classification frameworks.
Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.
Must be able to work in the PST time zone.

Pay: The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.

Benefits: As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a "free time" PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
    $150k-180k yearly 22h ago
  • Data Architect

    KPI Partners 4.8company rating

    Plano, TX jobs

KPI Partners is a five-time Gartner-recognized data, analytics, and AI consulting company. We are leaders in data engineering on Azure, AWS, Google, Snowflake, and Databricks. Founded in 2006, KPI has over 400 consultants and has successfully delivered over 1,000 projects to our clients. We are looking for skilled data engineers who want to work with the best team in data engineering.

Title: Senior Data Architect
Location: Plano, TX (Hybrid)
Job Type: Contract - 6 Months
Key Skills: SQL, PySpark, Databricks, and Azure Cloud
Key Note: Looking for a Data Architect who is hands-on with SQL, PySpark, Databricks, and Azure Cloud.

About the Role: We are seeking a highly skilled and experienced Senior Data Architect to join our dynamic team at KPI, working on challenging, multi-year data transformation projects. This is an excellent opportunity for a talented data engineer to play a key role in building innovative data solutions using Azure native services and related technologies. If you are passionate about working with large-scale data systems and enjoy solving complex engineering problems, this role is for you.

Key Responsibilities:
Data Engineering: Design, develop, and implement data pipelines and solutions using PySpark, SQL, and related technologies (a brief illustrative sketch follows this description).
Collaboration: Work closely with cross-functional teams to understand business requirements and translate them into robust data solutions.
Data Warehousing: Design and implement data warehousing solutions, ensuring scalability, performance, and reliability.
Continuous Learning: Stay up to date with modern technologies and trends in data engineering and apply them to improve our data platform.
Mentorship: Provide guidance and mentorship to junior data engineers, ensuring best practices in coding, design, and development.

Must-Have Skills & Qualifications:
Minimum 12+ years of overall experience in the IT industry.
4+ years of experience in data engineering, with a strong background in building large-scale data solutions.
4+ years of hands-on experience developing and implementing data pipelines using the Azure stack (Azure, ADF, Databricks, Functions).
Proven expertise in SQL for querying, manipulating, and analyzing large datasets.
Strong knowledge of ETL processes and data warehousing fundamentals.
Self-motivated and independent, with a "let's get this done" mindset and the ability to thrive in a fast-paced and dynamic environment.

Good-to-Have Skills:
Databricks Certification is a plus.
Data Modeling, Azure Architect Certification.
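To make the "data pipelines using PySpark and SQL" responsibility concrete, here is a minimal, hedged sketch of one common Databricks pattern: an incremental upsert of a landed extract into a Delta table with Spark SQL. The landing path, table names (staging_orders, dw.orders), and key column are hypothetical, and the orchestration that lands the file (for example, an ADF trigger) is assumed rather than shown.

```python
# Hedged sketch of an incremental upsert step on Databricks.
# Assumes Delta Lake plus a hypothetical landed extract and target table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("upsert-sketch").getOrCreate()

# An orchestrator (e.g., ADF) is assumed to have landed today's extract here.
incoming = spark.read.parquet("/mnt/landing/orders/2024-01-15/")
incoming.createOrReplaceTempView("staging_orders")

# Idempotent upsert: update rows that changed, insert rows that are new.
spark.sql("""
    MERGE INTO dw.orders AS t
    USING staging_orders AS s
      ON t.order_id = s.order_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```

Because the MERGE is keyed on order_id, re-running the same day's extract does not duplicate rows, which keeps the load idempotent.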
    $88k-123k yearly est. 4d ago
  • Data Analyst

    The Intersect Group 4.2company rating

    Irving, TX jobs

Job Title: Marketing & Merchandise Analyst - C-Shopper
**This position is a 9 month contract opportunity that cannot support C2C or any form of sponsorship**

The Marketing & Merchandise Analyst will work across various C-Shopper development initiatives, partnering with the C-Shopper team, internal data teams, and Circana/IRI personnel. This role focuses on driving adoption and impact of the C-Shopper Customer Insights platform among internal and external users, delivering actionable insights to improve decision-making and business performance.

Key Responsibilities:

Platform Development & Adoption
Assist in C-Shopper platform enhancements to maximize value for internal and external stakeholders.
Act as a subject matter expert (SME) and Customer Success resource for the C-Shopper team.
Drive internal adoption of Customer Insights tools across Marketing, Merchandising, Loyalty, Operations, and Finance teams.

User Engagement & Training
Coordinate and conduct onsite and virtual meetings with internal teams.
Deliver training sessions and provide Help Desk support for assigned user groups.
Initiate ongoing interactions with user groups to share insights and best practices.

Analytics & Insights Delivery
Produce analytics projects and presentations to support internal and external business needs.
Provide guidance and case studies demonstrating high-value insights for user groups.
Partner with user teams to act as the voice of the customer, influencing customer-centric strategies.

Customer Success & Support
Manage onboarding and ongoing support strategies for internal users.
Support external supplier projects with ad hoc analytics and presentations.
Define and track metrics for program impact, customer satisfaction, and platform usage.

Continuous Improvement
Anticipate and remove barriers to project success.
Conduct evaluations and gather feedback from user groups to improve adoption.
Monitor market and customer trends to enhance user experience and operational excellence.

Qualifications:
Strong analytical and problem-solving skills.
Excellent communication and presentation abilities.
Ability to manage multiple projects and collaborate across teams.
Familiarity with customer insights platforms and retail analytics preferred.
    $65k-88k yearly est. 22h ago
  • Data Engineer - AI & Data Modernization

    Guidehouse 3.7company rating

    San Antonio, TX jobs

Job Family: Data Science Consulting
Travel Required: Up to 25%
Clearance Required: Ability to Obtain Public Trust

What You Will Do:
We are seeking an experienced Data Engineer to join our growing AI and Data practice, with a dedicated focus on the Federal Civilian Agencies (FCA) market within the Communities, Energy & Infrastructure (CEI) segment. This individual will be a hands-on technical contributor, responsible for designing and implementing scalable data pipelines and interactive dashboards that enable federal clients to achieve mission outcomes, operational efficiency, and digital transformation. This is a strategic delivery role for someone who thrives at the intersection of data engineering, cloud platforms, and public sector analytics.

Client Leadership & Delivery
Collaborate with FCA clients to understand data architecture and reporting needs.
Lead the development of ETL pipelines and dashboard integrations using Databricks and Tableau.
Ensure delivery excellence and measurable outcomes across data migration and visualization efforts.

Solution Development & Innovation
Design and implement scalable ETL/ELT pipelines using Spark, SQL, and Python.
Develop and optimize Tableau dashboards aligned with federal reporting standards.
Apply AI/ML tools to automate metadata extraction, clustering, and dashboard scripting.

Practice & Team Leadership
Work closely with data architects, analysts, and cloud engineers to deliver integrated solutions.
Support documentation, testing, and deployment of data products.
Mentor junior developers and contribute to reusable frameworks and accelerators.

What You Will Need:
US Citizenship is required
Bachelor's degree is required
Minimum TWO (2) years of experience in data engineering and dashboard development
Proven experience with Databricks, Tableau, and cloud platforms (AWS, Azure)
Strong proficiency in SQL, Python, and Spark
Experience building ETL pipelines and integrating data sources into reporting platforms
Familiarity with data governance, metadata, and compliance frameworks
Excellent communication, facilitation, and stakeholder engagement skills

What Would Be Nice To Have:
AI/LLM Certifications
Experience working with FCA clients such as DOT, GSA, USDA, or similar
Familiarity with federal contracting and procurement processes

What We Offer:
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. Benefits include:
Medical, Rx, Dental & Vision Insurance
Personal and Family Sick Time & Company Paid Holidays
Position may be eligible for a discretionary variable incentive bonus
Parental Leave and Adoption Assistance
401(k) Retirement Plan
Basic Life & Supplemental Life
Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
Short-Term & Long-Term Disability
Student Loan PayDown
Tuition Reimbursement, Personal Development & Learning Opportunities
Skills Development & Certifications
Employee Referral Program
Corporate Sponsored Events & Community Outreach
Emergency Back-Up Childcare Program
Mobility Stipend

About Guidehouse
Guidehouse is an Equal Opportunity Employer-Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation.
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
    $68k-94k yearly est. 22h ago
  • Lead Data Architect

    Interactive Resources-IR 4.2company rating

    Tempe, AZ jobs

We are seeking a Lead Data Architect to drive the design and implementation of our enterprise data architecture with a focus on Azure Data Lake, Databricks, and Lakehouse architecture. This role will serve as the data design authority, ensuring alignment with enterprise standards while enabling business value through scalable, high-quality data solutions. The ideal candidate will have a proven track record in financial services or wealth management, deep expertise in data modeling and MDM (e.g., Profisee), and experience architecting cloud-native data platforms that support analytics, AI/ML, and regulatory/compliance requirements.

Key Responsibilities
Define and own the enterprise data architecture strategy, standards, and patterns.
Lead the design and implementation of Azure-based Lakehouse architecture leveraging Azure Data Lake, Databricks, Delta Lake, and related services.
Serve as the data design authority, governing data models, integration patterns, metadata management, and data quality standards.
Architect and implement Master Data Management (MDM) solutions, preferably with Profisee (a simplified matching sketch follows this description).
Collaborate with stakeholders, engineers, and analysts to translate business requirements into scalable architecture and data models.
Ensure alignment with data governance, security, and compliance frameworks.
Provide technical leadership in data design, ETL/ELT best practices, and performance optimization.
Partner with enterprise and solution architects to integrate data architecture with application and cloud strategies.
Mentor and guide data engineers and modelers, fostering a culture of engineering and architecture excellence.

Required Qualifications
10+ years of experience in data architecture, data engineering, or related fields, with 5+ years in a lead/architect capacity.
Strong expertise in Azure Data Lake, Databricks, Delta Lake, and Lakehouse architecture.
Hands-on experience architecting and implementing MDM solutions (Profisee strongly preferred).
Deep knowledge of data modeling (conceptual, logical, physical) and metadata management.
Experience as a data design authority across enterprise programs.
Strong understanding of financial services data domains (clients, accounts, portfolios, products, transactions) and regulatory needs.
Proficiency in SQL, Python, Spark, and modern ELT/ETL tools.
Familiarity with data governance, lineage, cataloging, and data quality tools.
Excellent communication and leadership skills to engage with senior business and technology stakeholders.

Preferred Qualifications
Experience with real-time data streaming (Kafka, Event Hub).
Exposure to BI/Analytics platforms (Power BI, Tableau) integrated with Lakehouse.
Knowledge of data security and privacy frameworks in financial services.
Cloud certification in Microsoft Azure Data Engineering/Architecture.

Benefits
Comprehensive health, vision, and dental coverage.
401(k) plans plus a variety of voluntary plans such as legal services, insurance, and more.

👉 If you're a data architecture leader who thrives on building scalable, cloud-native data platforms and want to make an impact in financial services, we'd love to connect.
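As a loose illustration of the MDM work this role covers, the sketch below applies a toy match rule (records sharing a normalized email are the same client) and a toy survivorship rule (the most recently updated record wins) to produce a "golden" client record in PySpark. A real MDM platform such as Profisee adds fuzzy matching, stewardship workflows, and hierarchy management, none of which is shown; every name here is a made-up placeholder.

```python
# Simplified "golden record" sketch: match on normalized email, survive the newest row.
# Data, column names, and rules are hypothetical placeholders.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("mdm-sketch").getOrCreate()

clients = spark.createDataFrame(
    [
        ("C-001", "jane.doe@example.com", "Jane Doe",    "2024-01-05"),
        ("C-017", "Jane.Doe@example.com", "Jane R. Doe", "2024-03-02"),
        ("C-002", "sam.lee@example.com",  "Sam Lee",     "2023-11-20"),
    ],
    ["source_id", "email", "full_name", "updated_at"],
)

# Match rule: group candidate records by lower-cased email.
# Survivorship rule: within each group, keep the most recently updated record.
w = Window.partitionBy(F.lower(F.col("email"))).orderBy(F.col("updated_at").desc())
golden = (
    clients.withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
)
golden.show(truncate=False)
```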
    $90k-118k yearly est. 22h ago
  • System Integration Architect

    Mindlance 4.6company rating

    Chicago, IL jobs

Client: Airlines/Aerospace/Aviation
Title: Workday Integration Architect / System Integration Architect / Integration Architect / System Architect
Duration: 12 Months

Top 3 skill sets required for this role:
1. ServiceNow HR vertical technical and functional skills
2. Workday and third-party integrations to ServiceNow
3. Communication and stakeholder management

Nice-to-have skills or certifications:
1. Experience with AI/ServiceNow Virtual Agents
2. Employee portal design and development
3. Conflict management

Job Description: The project runs in two phases, beginning with a ServiceNow-to-Workday migration targeted for completion by 2026. The role requires deep technical skills with ServiceNow and Workday, along with strong communication and stakeholder management in the HR space and the ability to explain concepts clearly. The architect works in a team environment but is expected to drive decisions, and day-to-day work includes hands-on development. The purpose of the role is to augment architecture skills on a new project, working with system implementation partners on integrations; a cloud background and experience with both platforms are expected, at a mid-range or senior level, from someone who has done this kind of migration before and can advise on what to look out for. The architect will also work on a new employee portal, which will continue to advance in phase 2. Key focus areas are ServiceNow in the HR vertical, integrations, AI and search (and how to maximize them), communication skills, and time management; candidates without Workday experience but with other migration experience are acceptable. A brief illustrative API sketch follows this description.

Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.
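For orientation on the integration work referenced above, the sketch below shows roughly what a read from the ServiceNow Table API looks like, here pulling HR case records. The instance URL, credentials, and table choice are placeholders, and a production ServiceNow-to-Workday integration would normally run through an integration hub or middleware with proper secrets management rather than a standalone script like this.

```python
# Hedged sketch: read HR cases from a ServiceNow instance via the Table API.
# Instance, credentials, and table name are placeholders, not client systems.
import requests

INSTANCE = "https://example-instance.service-now.com"  # placeholder instance
TABLE = "sn_hr_core_case"                              # assumed HR case table

resp = requests.get(
    f"{INSTANCE}/api/now/table/{TABLE}",
    params={"sysparm_query": "active=true", "sysparm_limit": 10},
    auth=("integration.user", "********"),             # placeholder credentials
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
for case in resp.json().get("result", []):
    print(case.get("number"), case.get("short_description"))
```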
    $87k-120k yearly est. 2d ago
  • SAP PP/MES Consultant / Architect

    Avance Consulting 4.4company rating

    Fort Worth, TX jobs

SAP PP/MES consultant with an experience range of 12 - 20 years. Configuration expert in the SAP PP/PEO/QM modules. Knowledge of 'product rework/rework from stock' and MRO processes is essential. Industry experience in aerospace & defense or automotive would be nice to have. Exposure to FIORI applications and the S/4HANA application is preferred. Hands-on with PP-MM and PP-QM integration. Experience in complex assembly manufacturing preferred. Capable of requirements gathering and building functional specifications. Able to guide ABAP developers on the solution and test with business acumen. Able to sum up business requirements into business processes and help break them down into items to be built from a development perspective. At least 2 - 3 SAP implementation projects preferred, covering both the Production Planning and Quality modules. An SAP consultant with knowledge of Parameter Unit Effectivity would be a bonus (aircraft build). Hands-on experience with material requirements planning and with project stock involving WBS elements. Theoretical knowledge of SAP Project Systems (network orders, WBS elements, and PP-PS integration via MA & IKs). Must be results-oriented and demonstrate a can-do attitude: adaptability, flexibility, and resourcefulness.
    $83k-128k yearly est. 1d ago

Learn more about Link Technologies jobs