
Data Architect jobs at Link Technologies

- 3060 jobs
  • Data Modeler II

    Airswift · 4.9 company rating

    Houston, TX

    Job Title: Data Modeler II
    Type: W2 Contract (USA)/INC or T4 (Canada)
    Work Setup: Hybrid (on-site with flexibility to work from home two days per week)
    Industry: Oil & Gas
    Benefits: Health, Dental, Vision

    Job Summary: We are seeking a Data Modeler II with a product-driven, innovative mindset to design and implement data solutions that deliver measurable business value for Supply Chain operations. This role combines technical expertise with project management responsibilities, requiring collaboration with IT teams to develop solutions for small and medium-sized business challenges. The ideal candidate will have hands-on experience with data transformation, AI integration, and ERP systems, and be able to communicate technical concepts in clear, business-friendly language.

    Key Responsibilities:
    - Develop innovative data solutions leveraging knowledge of Supply Chain processes and oil & gas industry value drivers.
    - Design and optimize ETL pipelines for scalable, high-performance data processing.
    - Integrate solutions with enterprise data platforms and visualization tools.
    - Gather and clean data from ERP systems for analytics and reporting (see the sketch below).
    - Utilize AI tools and prompt engineering to enhance data-driven solutions.
    - Collaborate with IT and business stakeholders to deliver medium- and low-level solutions for local issues.
    - Oversee project timelines, resources, and stakeholder engagement.
    - Document project objectives, requirements, and progress updates.
    - Translate technical language into clear, non-technical terms for business users.
    - Support continuous improvement and innovation in data engineering and analytics.

    Basic/Required Qualifications:
    - Bachelor's degree in Commerce (SCM), Data Science, Engineering, or a related field.
    - Hands-on experience with Python for data transformation, ETL tools (Power Automate, Power Apps; Databricks is a plus), and Oracle Cloud (Supply Chain and Financial modules).
    - Knowledge of ERP systems (Oracle Cloud required; SAP preferred).
    - Familiarity with AI integration and low-code development platforms.
    - Strong understanding of Supply Chain processes; oil & gas experience preferred.
    - Ability to manage projects and engage stakeholders effectively.
    - Excellent communication skills for translating technical concepts into business language.

    Required Knowledge/Skills/Abilities:
    - Advanced proficiency in data science concepts, including statistical analysis and machine learning.
    - Experience with prompt engineering and AI-driven solutions.
    - Ability to clean and transform data for analytics and reporting.
    - Strong documentation, troubleshooting, and analytical skills.
    - Business-focused mindset with technical expertise.
    - Ability to think outside the box and propose innovative solutions.

    Special Job Characteristics:
    - Hybrid work schedule (Wednesdays and Fridays remote).
    - Ability to work independently and oversee own projects.
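    The ERP data-cleaning and transformation work described above can be pictured with a short sketch. A minimal pandas example, assuming a hypothetical CSV extract of purchase orders (the file name and column names are illustrative, not from the posting):

    ```python
    import pandas as pd

    # Load a raw purchase-order extract pulled from the ERP system
    # (hypothetical file and columns, for illustration only).
    orders = pd.read_csv("oracle_po_extract.csv", parse_dates=["order_date"])

    # Basic cleaning: drop exact duplicates and rows missing key fields.
    orders = orders.drop_duplicates()
    orders = orders.dropna(subset=["po_number", "supplier_id", "amount_usd"])

    # Normalize supplier names for consistent reporting.
    orders["supplier_name"] = orders["supplier_name"].str.strip().str.title()

    # Aggregate to a reporting-ready dataset: monthly spend per supplier.
    monthly_spend = (
        orders.assign(month=orders["order_date"].dt.to_period("M").astype(str))
              .groupby(["supplier_id", "supplier_name", "month"], as_index=False)
              ["amount_usd"].sum()
    )

    monthly_spend.to_parquet("supplier_monthly_spend.parquet", index=False)
    ```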
    $82k-115k yearly est. 1d ago
  • Data Modeler

    Airswift · 4.9 company rating

    Midland, TX

    Job Title: Data Modeler - Net Zero Program Analyst
    Type: W2 Contract (12-month duration)
    Work Setup: On-site
    Industry: Oil & Gas
    Benefits: Dental, Healthcare, Vision & 401(k)

    Airswift is seeking a Data Modeler - Net Zero Program Analyst to join one of our major clients on a 12-month contract. This newly created role supports the company's decarbonization and Net Zero initiatives by managing and analyzing operational data to identify trends and optimize performance. The position involves working closely with operations and analytics teams to deliver actionable insights through data visualization and reporting.

    Responsibilities:
    - Build and maintain Power BI dashboards to monitor emissions, operational metrics, and facility performance.
    - Extract and organize data from systems such as SiteView, ProCount, and SAP for analysis and reporting.
    - Conduct data validation and trend analysis to support sustainability and operational goals (illustrated in the sketch below).
    - Collaborate with field operations and project teams to interpret data and provide recommendations.
    - Ensure data consistency across platforms and assist with integration efforts (coordination only, no coding required).
    - Present findings through clear reports and visualizations for technical and non-technical stakeholders.

    Required Skills and Experience:
    - 7+ years of experience in data analysis within the Oil & Gas or Energy sectors.
    - Strong proficiency in Power BI (required).
    - Familiarity with SiteView, ProCount, and/or SAP (preferred).
    - Ability to translate operational data into insights that support emissions reduction and facility optimization.
    - Experience with surface facilities, emissions estimation, or power systems.
    - Knowledge of other visualization tools (Tableau, Spotfire) is a plus.
    - High School Diploma or GED required.

    Additional Details: Preference for Midland-based candidates; Houston-based candidates will need to travel to Midland periodically (travel reimbursed). No per diem offered. Office-based role with low exposure risk.
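    Although the role itself is low-code, the validation and trend analysis it describes are easy to picture in a few lines. A minimal pandas sketch, assuming a hypothetical monthly emissions extract (file and column names are illustrative):

    ```python
    import pandas as pd

    # Hypothetical monthly emissions extract per facility (illustrative columns).
    df = pd.read_csv("facility_emissions.csv", parse_dates=["month"])

    # Validation: flag negative or missing readings before they reach a dashboard.
    bad_rows = df[df["co2_tonnes"].isna() | (df["co2_tonnes"] < 0)]
    print(f"{len(bad_rows)} rows failed validation")

    # Trend: 3-month rolling average of emissions per facility.
    trend = (
        df.sort_values("month")
          .groupby("facility_id")["co2_tonnes"]
          .rolling(window=3, min_periods=1).mean()
          .reset_index()
    )
    ```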
    $83k-116k yearly est. 3d ago
  • SAP Public Cloud Data Management

    The Planet Group · 4.1 company rating

    New York, NY

    Manager, SAP Public Cloud Data
    Salary Range: $135,000 - $218,000

    Introduction: We're seeking an experienced SAP data conversion leader to join a rapidly growing Advisory practice at a leading professional services firm. This role is perfect for a strategic thinker who thrives on complex data challenges and wants to make a significant impact on large-scale SAP S/4HANA Public Cloud implementations. You'll lead the entire data conversion workstream, develop innovative solutions, and mentor teams while working with enterprise clients on their digital transformation journeys. If you're looking for a firm that prioritizes professional growth, offers world-class training, and values collaboration, this is an exceptional opportunity to advance your career.

    Required Skills & Qualifications:
    - Minimum 5 years of experience in SAP data conversion and governance
    - Must have experience at a Big 4 firm
    - At least one full lifecycle SAP S/4HANA Public Cloud implementation with direct involvement in scoping and designing the data workstream during the sales pursuit phase
    - Bachelor's degree from an accredited college or university in an appropriate field
    - Proven expertise in developing and executing end-to-end data conversion strategies, including legacy landscape assessment, source-to-target mapping, and data governance framework design
    - Demonstrated success managing complete data conversion workstreams within large-scale SAP programs, including planning, risk mitigation, issue resolution, and budget oversight
    - Strong technical command of data architecture principles, with hands-on experience designing ETL pipelines and leading full data migration lifecycles from mock cycles through final cutover
    - Ability to travel 50-80%
    - Must be authorized to work in the U.S. without the need for employment-based visa sponsorship now or in the future

    Preferred Skills & Qualifications:
    - Experience with SAP BTP (Business Technology Platform) and Datasphere for data orchestration
    - Track record of developing reusable ETL templates, automation scripts, and governance accelerators
    - Experience supporting sales pursuits by providing data conversion scope, solution design, and pricing input
    - Strong leadership and mentoring capabilities with data-focused teams

    Day-to-Day Responsibilities:
    - Develop and own comprehensive data conversion strategies: assessing legacy landscapes, defining source-to-target mapping (see the sketch below), establishing cleansing protocols, and designing data governance frameworks
    - Lead the data conversion workstream within SAP S/4HANA programs, managing project plans, budgets, and financials while proactively identifying and resolving risks and issues
    - Design and oversee data conversion architecture, including ETL pipelines, staging strategies, and validation protocols
    - Execute the full data conversion lifecycle hands-on, including ETL design, multiple mock cycles, data validation, and final cutover, ensuring alignment with program milestones
    - Support pursuit directors during sales cycles by providing expert input into data conversion scope, solution design, and pricing
    - Lead and mentor data conversion teams by assigning tasks, managing delivery quality, and fostering a collaborative culture
    - Drive efficiency through the development of reusable templates and automation accelerators for future projects

    Company Benefits & Culture:
    - Comprehensive, competitive benefits package including medical, dental, and vision coverage
    - 401(k) plans with company contributions
    - Disability and life insurance
    - Robust personal well-being benefits supporting mental health
    - Personal Time Off based on job classification and years of service
    - Two annual breaks where PTO is not required (year-end and the July 4th holiday period)
    - World-class training facility and leading market tools
    - Continuous learning and career development opportunities
    - Collaborative, team-driven culture where you can be your whole self
    - Fast-growing practice with abundant advancement opportunities

    Note: This position does not offer visa sponsorship (H-1B, L-1, TN, O-1, E-3, H-1B1, F-1, J-1, OPT, CPT, or any other employment-based visa).
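    The source-to-target mapping and mock-cycle validation described above can be sketched generically. A minimal Python example of a mapping table plus a row-count reconciliation check; the file names and mapping are hypothetical, and real SAP conversions run through dedicated migration tooling rather than hand-rolled scripts:

    ```python
    import pandas as pd

    # Hypothetical source-to-target field mapping for one object.
    # LIFNR/NAME1/ORT01 are classic ECC vendor-master fields; the targets
    # are illustrative.
    mapping = {
        "LIFNR": "Supplier",       # legacy vendor number -> S/4 business partner
        "NAME1": "SupplierName",
        "ORT01": "City",
    }

    legacy = pd.read_csv("ecc_vendor_extract.csv")

    # Apply the mapping and a basic cleansing rule.
    target = legacy[list(mapping)].rename(columns=mapping)
    target["SupplierName"] = target["SupplierName"].str.strip()

    # Mock-cycle reconciliation: every legacy record must land in the target.
    assert len(target) == len(legacy), "row-count mismatch between source and target"
    target.to_csv("s4_supplier_load.csv", index=False)
    ```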
    $73k-103k yearly est. 2d ago
  • Data Analyst

    Mastech Digital · 4.7 company rating

    Newark, NJ

    Title: AWS Data Analyst
    Duration: 6+ Months (with extension)
    Rate: $50-51/hour on W2

    The ideal candidate will have strong analytical abilities, proficiency in tools like SQL, Excel, and Python, and excellent communication skills to translate complex data into actionable business strategies.

    Required:
    - Bachelor's degree in Computer Science, Data Engineering, or a related field, with 5+ years of experience in data analyst roles
    - Proficiency in SQL, Python, or Scala for data transformation and processing
    - Working knowledge of AWS services
    - Proven experience in data analysis, business intelligence, or related roles
    - Strong analytical and problem-solving skills with attention to detail
    - Excellent communication skills
    - Ability to work independently and collaboratively in a fast-paced environment

    Key Responsibilities:
    - Extract, clean, and analyze large datasets from multiple sources
    - Write complex SQL queries to retrieve, manipulate, and analyze data efficiently
    - Develop and maintain dashboards and reports for business stakeholders
    - Work with AWS services such as S3, Redshift, Athena, and Glue for data processing and analysis (see the sketch below)
    - Collaborate with cross-functional teams to understand data requirements and provide actionable insights
    - Ensure data integrity, consistency, and security across various databases
    - Identify trends, anomalies, and opportunities in data to drive business decisions
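    One way to picture the S3/Athena part of the workload is querying partitioned data in S3 through Athena from Python. A minimal sketch using the AWS SDK for pandas (awswrangler); the database, table, and columns are hypothetical:

    ```python
    import awswrangler as wr

    # Run an analytical query against data catalogued in Glue and stored in
    # S3; Athena does the heavy lifting server-side. Names are hypothetical.
    df = wr.athena.read_sql_query(
        sql="""
            SELECT region,
                   date_trunc('month', order_ts) AS month,
                   SUM(amount) AS revenue,
                   COUNT(DISTINCT customer_id) AS customers
            FROM orders
            WHERE order_ts >= date '2024-01-01'
            GROUP BY 1, 2
            ORDER BY 1, 2
        """,
        database="sales",
    )
    print(df.head())
    ```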
    $50-51 hourly 1d ago
  • Data Analyst

    Green Key Resources · 4.6 company rating

    New York, NY

    The Data Analyst will join the Knowledge & Innovation group and serve as a key contributor to the development and deployment of data and AI-driven solutions. This role focuses on applying analytics, machine learning, and large language models to deliver insights that support both legal teams and business functions. Working closely with cross-functional stakeholders and technology partners, the Data Analyst will help design and implement modern data solutions that improve efficiency, elevate internal client service, and support informed decision-making. This position offers the opportunity to help shape how advanced data and AI capabilities are adopted within a leading legal organization.

    Key Responsibilities:
    - Partner with attorneys, practice teams, and business stakeholders to understand challenges and identify opportunities for data- and AI-enabled process improvements.
    - Convert complex legal concepts and workflows into clearly defined data models and AI applications.
    - Architect, develop, and support scalable data pipelines and analytics platforms that power reporting, business intelligence, and AI initiatives.
    - Perform data analysis and modeling using tools such as SQL, Python, R, and related technologies to uncover trends and insights.
    - Work alongside internal technology teams to enhance and maintain core data assets, including databases and data warehouses.
    - Build processes and tools that transform raw data into accessible, intuitive datasets that support self-service reporting and analytics.
    - Leverage advanced analytical techniques, including machine learning and natural language processing, to identify patterns, relationships, and predictive insights.
    - Create and deploy analytical solutions for a range of use cases, including text analysis, forecasting, and trend analysis.
    - Develop, test, and optimize prompts for large language models to support legal research, drafting, and knowledge management workflows (see the sketch below).
    - Deliver end-to-end data and AI solutions, from initial concept and prototyping through production implementation.
    - Monitor emerging trends in data science and artificial intelligence, incorporating new methodologies and technologies where they add value.

    Skills & Qualifications:
    - Bachelor's degree in Data Science, Computer Science, Engineering, or a related discipline required
    - Advanced degree preferred, particularly with a focus on deep learning, NLP, or information retrieval
    - At least 3 years of relevant professional experience, including a minimum of 2 years in data engineering and/or data science roles
    - Demonstrated experience in data engineering, analytics, and data modeling
    - Strong command of Python, R, and SQL
    - Practical experience with machine learning, NLP, and data visualization tools
    - Ability to clearly communicate technical concepts to non-technical audiences
    - Prior experience in a legal, consulting, or professional services environment is a plus
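    The prompt-development responsibility above is the most code-adjacent part of the role. A minimal sketch of a versionable, testable prompt template in plain Python; no specific LLM vendor is assumed, and `call_llm` is a hypothetical stand-in for whatever client the team uses:

    ```python
    # A prompt template kept as data so it can be versioned and A/B-tested.
    SUMMARIZE_PROMPT = """You are assisting a legal knowledge-management team.
    Summarize the following document in {max_bullets} bullet points.
    Answer only from the text provided; say "not stated" when unsure.

    Document:
    {document}
    """

    def build_prompt(document: str, max_bullets: int = 5) -> str:
        return SUMMARIZE_PROMPT.format(document=document, max_bullets=max_bullets)

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for the firm's LLM client of choice.
        raise NotImplementedError

    if __name__ == "__main__":
        print(build_prompt("Sample engagement letter text...", max_bullets=3))
    ```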
    $64k-92k yearly est. 1d ago
  • Data Architect

    Optech · 4.6 company rating

    Cincinnati, OH

    THIS IS A W2 (NOT C2C OR REFERRAL-BASED) CONTRACT OPPORTUNITY.
    Mostly remote, with 1 day/month onsite in Cincinnati; local candidates take preference.
    Rate: $75-85/hr with benefits

    We are seeking a highly skilled Data Architect to work in a consulting capacity to analyze, redesign, and optimize a medical-payments client's environment. The ideal candidate will have deep expertise in SQL, Azure cloud services, and modern data architecture principles.

    Responsibilities:
    - Design and maintain scalable, secure, and high-performing data architectures
    - Lead migration and modernization projects in heavily used production systems
    - Develop and optimize data models, schemas, and integration strategies
    - Implement data governance, security, and compliance standards
    - Collaborate with business stakeholders to translate requirements into technical solutions
    - Ensure data quality, consistency, and accessibility across systems

    Required Qualifications:
    - Bachelor's degree in Computer Science, Information Systems, or a related field
    - Proven experience as a Data Architect or in a similar role
    - Strong proficiency in SQL (query optimization, stored procedures, indexing; see the sketch below)
    - Hands-on experience with Azure cloud services for data management and analytics
    - Knowledge of data modeling, ETL processes, and data warehousing concepts
    - Familiarity with security best practices and compliance frameworks

    Preferred Skills:
    - Understanding of Electronic Health Records systems
    - Understanding of Big Data technologies and modern data platforms outside the scope of this project
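    The query-optimization and indexing skills listed can be demonstrated end to end even in SQLite. A self-contained sketch with a hypothetical claims table; the EXPLAIN-then-index workflow is the same one used on larger engines:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE claims (id INTEGER PRIMARY KEY, payer_id INT, amount REAL)")
    conn.executemany(
        "INSERT INTO claims (payer_id, amount) VALUES (?, ?)",
        [(i % 100, i * 1.5) for i in range(10_000)],
    )

    query = "SELECT SUM(amount) FROM claims WHERE payer_id = 42"

    # Before indexing: the plan shows a full table scan.
    print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

    # Add a covering index on the filter column, then re-check the plan.
    conn.execute("CREATE INDEX idx_claims_payer ON claims (payer_id, amount)")
    print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
    ```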
    $75-85 hourly 2d ago
  • Data Architect

    Mastech Digital · 4.7 company rating

    Dallas, TX

    Primary responsibilities of the Senior Data Architect include designing and managing data architectural solutions for multiple environments, including but not limited to Data Warehouse, ODS, and Data Replication/ETL Data Management initiatives. The candidate will be in an expert role and will work closely with Business, DBA, ETL, and Data Management teams, providing solutions for complex data-related initiatives. This individual will also be responsible for developing and managing Data Governance and Master Data Management solutions, and must have good technical and communication skills coupled with the ability to mentor effectively.

    Responsibilities:
    - Establish policies, procedures, and guidelines regarding all aspects of Data Governance
    - Ensure data decisions are consistent and best practices are adhered to
    - Ensure Data Standardization definitions, the Data Dictionary, and Data Lineage are kept up to date and accessible
    - Work with ETL, Replication, and DBA teams to determine best practices for data transformations, data movement, and derivations
    - Work with support teams to ensure consistent and proactive support methodologies are in place for all aspects of data movement and data transformation
    - Work with and mentor Data Architects and Data Analysts to ensure best practices are adhered to for database design and data management
    - Assist in overall architectural solutions, including but not limited to Data Warehouse, ODS, and Data Replication/ETL Data Management initiatives
    - Work with the business teams and the Enterprise Architecture team to ensure the best architectural solutions from a data perspective
    - Create a strategic roadmap for MDM implementation and be responsible for implementing a Master Data Management tool
    - Establish policies, procedures, and guidelines regarding all aspects of Master Data Management
    - Ensure architectural rules and the design of the MDM process are documented and best practices are adhered to

    Qualifications:
    - 5+ years of Data Architecture experience, including OLTP, Data Warehouse, and Big Data
    - 5+ years of Solution Architecture experience
    - 5+ years of MDM experience
    - 5+ years of Data Governance experience, with working knowledge of best practices
    - Extensive working knowledge of all aspects of data movement and processing, including middleware, ETL, API, OLAP, and best practices for data tracking
    - Good communication skills; self-motivated; capable of presenting to all levels of audience; works well in a team environment
    $93k-120k yearly est. 3d ago
  • Data Architect

    Green Key Resources · 4.6 company rating

    New York, NY

    Data Solutions Architect

    The Data Solutions Architect will play a pivotal role in advancing organizational data and artificial intelligence (AI) initiatives. Leveraging statistical analysis, machine learning (ML), and large language models (LLMs), this role focuses on extracting insights and supporting decision-making across diverse business operations and professional service practices. The architect will collaborate with innovation teams, technical resources, and stakeholders to design and implement data-driven solutions that enhance service delivery and operational efficiency. Staying current with emerging technologies and best practices, the Data Solutions Architect will integrate cutting-edge techniques into projects, offering a unique opportunity to shape the future of data and AI within the professional services sector.

    Principal Duties and Responsibilities:
    - Partner with operational and practice teams to identify challenges and opportunities for workflow improvement.
    - Translate complex domain logic into actionable data requirements and AI use cases.
    - Design, build, and maintain scalable data pipelines and infrastructure to support AI and BI initiatives.
    - Utilize SQL, Python, R, and other analytics tools to analyze, model, and visualize data trends.
    - Collaborate with technology teams to refine and maintain data pipelines, warehouses, and databases.
    - Develop tools and processes to transform raw data into user-friendly formats for self-service analytics.
    - Apply advanced quantitative methods, including ML and NLP, to identify patterns and build predictive models.
    - Design and deploy systems for applications such as text analysis, trend analysis, and predictive modeling (see the sketch below).
    - Craft, test, and refine prompts for LLMs to generate contextually accurate outputs tailored to research and drafting workflows.
    - Deliver AI-driven solutions from proof of concept through production, addressing cross-functional and practice-specific needs.
    - Continuously monitor advancements in AI, ML, and data science, integrating innovative technologies into organizational projects.

    Job Specifications:
    - Required Education: Bachelor's degree in Data Science, Computer Science, Engineering, or a related field.
    - Preferred Education: Master's degree in a relevant discipline; coursework in deep learning, NLP, or information retrieval is highly valued.
    - Required Experience: Minimum of 3 years of relevant experience, including at least 2 years in data engineering and data science roles.

    Competencies:
    - Demonstrated expertise in data analytics and engineering with a strong focus on data modeling.
    - Proficiency in statistical programming languages (Python, R) and database management (SQL).
    - Hands-on experience with ML, NLP, and data visualization tools.
    - Strong problem-solving and communication skills, with the ability to present complex data to non-technical audiences.
    - Experience in professional services or related environments preferred.
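    As one concrete instance of the text-analysis systems mentioned above, here is a minimal scikit-learn sketch that scores document similarity with TF-IDF. The sample documents are invented, and a production system would add preprocessing and evaluation:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "Engagement letter for M&A advisory services.",
        "Letter of engagement covering merger advisory work.",
        "Quarterly facilities maintenance report.",
    ]

    # Vectorize and compare: near-duplicate documents score high.
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    sim = cosine_similarity(tfidf)
    print(sim.round(2))  # docs 0 and 1 should be far more similar than doc 2
    ```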
    $106k-149k yearly est. 16h ago
  • Senior Enterprise Data Architect

    Nam Info Inc. · 4.3 company rating

    Cranbury, NJ

    Full-Time Position, Onsite at Cranbury, NJ

    About NAM: NAM Info is an IT application and implementation services company with its US HQ in Cranbury, New Jersey, and development centers headquartered in Bangalore, India. NAM's distinctive service-line offerings include Professional Services and Managed Services - re-engineering and modernization, largely involving emerging technology. NAM is also home to a next-generation Data Intelligence Platform that enables enterprises to automate and accelerate their journey from data to insights. Our platform simplifies and unifies data engineering, governance, and analytics, empowering organizations to achieve end-to-end data intelligence at scale. As we expand our product capabilities, we are seeking a Senior Enterprise Data Architect to help ensure that the Inferyx platform delivers world-class performance, accuracy, and reliability.

    About the Role: As an Enterprise Data Architect, you will be responsible for driving end-to-end Data Lake implementation projects using Snowflake or Databricks on AWS, Azure, or GCP. You will lead cross-functional teams, manage delivery from design to deployment, and serve as the main point of contact for clients and internal stakeholders. This role demands hands-on technical experience, strong project management expertise, and a proven record of 3-4 full lifecycle Data Lake implementations. The ideal candidate should also bring pre-sales experience and the ability to craft RFP/RFQ responses and Statements of Work (SOWs). Implementation experience should preferably be data-product based; application-based implementation is acceptable.

    Key Responsibilities:
    - ETL/ELT: Design, develop, and maintain ETL/ELT pipelines using AWS services and Snowflake.
    - End-to-End Delivery: Lead and execute 3-4 full lifecycle Data Lake implementations, from architecture design and development to deployment and post-go-live support.
    - Architecture & Implementation: Design and build scalable, secure, and efficient Data Lake solutions using Snowflake or Databricks on AWS, following Medallion Architecture (Bronze, Silver, Gold layers; see the sketch below).
    - AWS Integration: Implement and optimize Data Lake solutions using AWS services such as S3, Glue, Lambda, CloudFormation, EC2, IAM, and Redshift where applicable.
    - Pre-Sales & Client Engagement: Participate in pre-sales discussions, prepare technical proposals, respond to RFPs/RFQs, and draft SOWs in collaboration with sales and architecture teams.
    - Leadership & Team Management: Lead a team of data engineers, architects, and analysts, ensuring high-quality deliverables, mentoring team members, and fostering a culture of excellence.
    - Project Governance: Oversee planning, resource allocation, risk management, and execution to ensure projects are delivered on time, within scope, and with high customer satisfaction.
    - Agile Delivery: Facilitate Agile ceremonies (daily stand-ups, sprint planning, retrospectives) and maintain delivery dashboards and KPIs.
    - Data Quality & Compliance: Enforce best practices in data security, access controls, and data quality management across all implementations.
    - Continuous Improvement: Drive innovation and process efficiency across delivery frameworks and technical practices.

    Mandatory Requirements:
    - Proven hands-on experience with Medallion Architecture.
    - Experience delivering 3-4 full lifecycle Data Lake projects using Snowflake or Databricks on AWS.
    - Strong AWS/Azure cloud implementation experience with data integration and orchestration services.
    - Proven pre-sales experience, including RFP/RFQ responses and writing SOWs for clients: solution scoping and conducting full-fledged POCs (including complete architectural workflow layout), price scoping, resource scoping, and deliverable-milestone scoping.

    Required Skills & Experience:
    - 15-20 years of overall experience in data engineering, analytics, or cloud delivery roles, including the most recent five to seven years as an Enterprise Architect.
    - Strong understanding of data platforms, ETL/ELT pipelines, data warehousing, and the analytics lifecycle.
    - Deep knowledge of Snowflake or Databricks architecture, performance tuning, and optimization.
    - Hands-on proficiency in SQL, Python, or Unix shell scripting.
    - Sound knowledge of data security, access management, and governance frameworks.
    - Excellent communication, stakeholder management, and presentation skills.
    - Strong project management capabilities with Agile/Scrum or hybrid methodologies.
    - Proven ability to manage multiple projects and cross-functional teams simultaneously.
    - Top 5 skills: hands-on Databricks/Snowflake on AWS, Medallion Architecture, and automation (Python, PySpark, Terraform).
    - Candidate type: a builder-leader hybrid, not a governance-only profile.
    - Delivery and CI/CD depth: ownership of the end-to-end lifecycle, pipeline automation, and DevOps practices (Airflow/Glue).

    Preferred Qualifications:
    - Certifications in AWS (Solutions Architect / Data Analytics), Snowflake, or Databricks.
    - Experience with CI/CD for data pipelines, Terraform/CloudFormation, and modern orchestration tools (e.g., Airflow, dbt).
    - Familiarity with data catalog/governance tools such as Collibra, Alation, or AWS Glue Data Catalog.

    Kindly reply with your resume to Email- *****************
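    As a reference point for the Medallion Architecture named above, here is a minimal PySpark sketch of the Bronze → Silver → Gold flow. Paths and columns are hypothetical, and a real implementation would use incremental loads rather than full rewrites:

    ```python
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("medallion_demo").getOrCreate()

    # Bronze: land raw data as-is, adding only ingestion metadata.
    bronze = (spark.read.json("s3://lake/raw/orders/")
                   .withColumn("_ingested_at", F.current_timestamp()))
    bronze.write.mode("append").format("delta").save("s3://lake/bronze/orders")

    # Silver: cleanse and conform - dedupe, type, and filter.
    silver = (bronze.dropDuplicates(["order_id"])
                    .withColumn("amount", F.col("amount").cast("double"))
                    .filter(F.col("amount").isNotNull()))
    silver.write.mode("overwrite").format("delta").save("s3://lake/silver/orders")

    # Gold: business-level aggregate ready for BI consumption.
    gold = silver.groupBy("region").agg(F.sum("amount").alias("total_sales"))
    gold.write.mode("overwrite").format("delta").save("s3://lake/gold/sales_by_region")
    ```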
    $110k-148k yearly est. 2d ago
  • Data Architect

    KPI Partners · 4.8 company rating

    Plano, TX

    KPI Partners is a five-time Gartner-recognized data, analytics, and AI consulting company. We are leaders in data engineering on Azure, AWS, Google, Snowflake, and Databricks. Founded in 2006, KPI has over 400 consultants and has successfully delivered over 1,000 projects to our clients. We are looking for skilled data engineers who want to work with the best team in data engineering.

    Title: Senior Data Architect
    Location: Plano, TX (Hybrid)
    Job Type: Contract - 6 Months
    Key Skills: SQL, PySpark, Databricks, and Azure Cloud
    Key Note: Looking for a Data Architect who is hands-on with SQL, PySpark, Databricks, and Azure Cloud.

    About the Role: We are seeking a highly skilled and experienced Senior Data Architect to join our dynamic team at KPI, working on challenging, multi-year data transformation projects. This is an excellent opportunity for a talented data engineer to play a key role in building innovative data solutions using Azure native services and related technologies. If you are passionate about working with large-scale data systems and enjoy solving complex engineering problems, this role is for you.

    Key Responsibilities:
    - Data Engineering: Design, develop, and implement data pipelines and solutions using PySpark, SQL, and related technologies (see the sketch below).
    - Collaboration: Work closely with cross-functional teams to understand business requirements and translate them into robust data solutions.
    - Data Warehousing: Design and implement data warehousing solutions, ensuring scalability, performance, and reliability.
    - Continuous Learning: Stay up to date with modern technologies and trends in data engineering and apply them to improve our data platform.
    - Mentorship: Provide guidance and mentorship to junior data engineers, ensuring best practices in coding, design, and development.

    Must-Have Skills & Qualifications:
    - Minimum 12+ years of overall experience in the IT industry.
    - 4+ years of experience in data engineering, with a strong background in building large-scale data solutions.
    - 4+ years of hands-on experience developing and implementing data pipelines using the Azure stack (Azure, ADF, Databricks, Functions).
    - Proven expertise in SQL for querying, manipulating, and analyzing large datasets.
    - Strong knowledge of ETL processes and data warehousing fundamentals.
    - Self-motivated and independent, with a "let's get this done" mindset and the ability to thrive in a fast-paced, dynamic environment.

    Good-to-Have Skills:
    - Databricks Certification is a plus.
    - Data Modeling, Azure Architect Certification.
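    A minimal PySpark sketch of the kind of pipeline step the role describes: read raw files from Azure storage, apply a transformation, and write a partitioned Delta table. The paths and column names are hypothetical:

    ```python
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_curation").getOrCreate()

    # Hypothetical raw zone -> curated zone step on ADLS Gen2.
    raw = spark.read.json("abfss://raw@account.dfs.core.windows.net/orders/")

    curated = (
        raw.dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("order_ts"))
           .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .filter(F.col("amount") > 0)
    )

    (curated.write.mode("overwrite")
            .partitionBy("order_date")
            .format("delta")
            .save("abfss://curated@account.dfs.core.windows.net/orders/"))
    ```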
    $88k-123k yearly est. 3d ago
  • Data Governance Lead - Data Architecture & Governance

    Addison Group · 4.6 company rating

    New York, NY

    Job Title: Data Governance Lead - Data Architecture & Governance
    Employment Type: Full-Time
    Base Salary: $220K to $250K (based on experience) + bonus; eligible for medical, dental, and vision

    About the Role: We are seeking an experienced Data Governance Lead to join a dynamic data and analytics team in New York. This role will design and oversee the organization's data governance framework, stewardship model, and data quality approach across financial services business lines, ensuring trusted and well-defined data for reporting and analytics across the Databricks lakehouse, CRM, management reporting, data science teams, and GenAI initiatives.

    Primary Responsibilities:
    - Design, implement, and refine an enterprise-wide data governance framework, including policies, standards, and roles for data ownership and stewardship.
    - Lead the design of data quality monitoring, dashboards, reporting, and exception-handling processes, coordinating remediation with stewards and technology teams.
    - Drive communication and change management for governance policies and standards, making them practical and understandable for business stakeholders.
    - Define governance processes for critical data domains (e.g., companies, contacts, funds, deals, clients, sponsors) to ensure consistency, compliance, and business value.
    - Identify and onboard business data owners and stewards across business teams.
    - Partner with Data Solution Architects and business stakeholders to align definitions, semantics, and survivorship rules, including support for DealCloud implementations.
    - Define and prioritize data quality rules and metrics for key data domains (see the sketch below).
    - Develop training and onboarding materials for stewards and users to reinforce governance practices and improve reporting, risk management, and analytics outcomes.

    Qualifications:
    - 6-8 years in data governance, data management, or related roles, preferably within financial services.
    - Strong understanding of data governance concepts, including stewardship models, data quality management, and issue-resolution processes.
    - Familiarity with CRM or deal management platforms (e.g., DealCloud, Salesforce) and modern data platforms (e.g., Databricks or similar).
    - Proficiency in SQL for data investigation, ad hoc analysis, and validation of data quality rules.
    - Comfortable working with Databricks, Jupyter notebooks, Excel, and BI tools.
    - Python skills for automation, data wrangling, profiling, and validation are strongly preferred.
    - Exposure to investment banking, equities, or private markets data is a plus.
    - Excellent written and verbal communication skills with the ability to lead cross-functional discussions and influence senior stakeholders.
    - Highly organized, proactive, and able to balance strategic governance framework design with hands-on execution.
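    The data-quality-rule work above can be pictured as a small profiling script. A minimal pandas sketch; the extract, columns, and rules are hypothetical examples of what a steward might define for a "deals" domain:

    ```python
    import pandas as pd

    # Hypothetical CRM extract of deal records (illustrative columns).
    deals = pd.read_csv("dealcloud_extract.csv")

    # Data quality rules a steward might define for the "deals" domain.
    rules = {
        "deal_id is unique": deals["deal_id"].is_unique,
        "client_name populated": deals["client_name"].notna().all(),
        "close_date parseable": pd.to_datetime(
            deals["close_date"], errors="coerce").notna().all(),
    }

    # Simple exception report for remediation with the stewards.
    for rule, passed in rules.items():
        print(f"{'PASS' if passed else 'FAIL'}: {rule}")
    ```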
    $220k-250k yearly 2d ago
  • AI Data Analyst

    Motion Recruitment · 4.5 company rating

    Dallas, TX

    Motion Recruitment has partnered with a retail e-commerce client and is seeking a Data Analyst specializing in Machine Learning and Artificial Intelligence.

    About the Role: This role involves working closely with data to support the Machine Learning pipeline, requiring a strong foundation in statistical techniques and data analysis.

    Location: Onsite
    Duration: 12 months with possible extension
    Type: W-2 Contract only - C2C, third-party, or sponsorship arrangements are not supported at this time.
    Interview: Onsite. Local candidates are encouraged to apply, as the job requires an onsite interview.

    Responsibilities:
    - Research, prototype, and build analysis and visualizations for our Machine Learning pipeline.
    - Stay up to date with emerging technology and learn new technologies/libraries/frameworks.
    - Learn from and partner with peers across multiple disciplines, such as computer vision, machine learning, and systems design.
    - Deliver on time with a high bar on quality of research, innovation, and engineering.

    Required Skills:
    - Strong knowledge of statistical techniques and advanced mathematics.
    - 3+ years of data analysis/engineering/science within the Databricks ecosystem (Azure preferred).
    - 5+ years of experience using statistical techniques to analyze, segment, and visualize data, specifically around experimental design, KPI calculation, and A/B testing (see the sketch below).
    - 4+ years of experience manipulating big data using Python, PySpark, or SQL.
    - Expert experience with data visualization tools in Python, Power BI, etc.

    Preferred Skills:
    - Master's degree or higher in Computer Science, Engineering, or Math, or relevant experience.
    - Experience working with Machine Learning models - evaluation, observability, and performance monitoring.
    - Experience working closely with a business team to determine primary KPIs on an ambiguous problem.

    Equal Opportunity Statement: We are committed to diversity and inclusivity in our hiring practices.
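    A minimal sketch of the A/B-testing skill named above: comparing conversion rates between control and treatment with a chi-squared contingency test. The data here is simulated, purely for illustration:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Simulated conversion outcomes for control vs. treatment (0/1 per user).
    control = rng.binomial(1, 0.100, size=20_000)
    treatment = rng.binomial(1, 0.105, size=20_000)

    # Two-proportion comparison via a chi-squared contingency test.
    table = [
        [control.sum(), len(control) - control.sum()],
        [treatment.sum(), len(treatment) - treatment.sum()],
    ]
    chi2, p, _, _ = stats.chi2_contingency(table)
    print(f"control={control.mean():.3%} treatment={treatment.mean():.3%} p={p:.3f}")
    ```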
    $54k-85k yearly est. 3d ago
  • Senior Business Data Architect

    Vernovis · 4.0 company rating

    Cincinnati, OH

    Job Title: Senior Business Data Architect

    Who we are: Vernovis is a Total Talent Solutions company that specializes in Technology, Cybersecurity, and Finance & Accounting functions. At Vernovis, we help these professionals achieve their career goals, matching them with innovative projects and dynamic direct-hire opportunities in Ohio and across the Midwest.

    Client Overview: Vernovis is partnering with a fast-paced manufacturing company that is looking to hire a Sr. Data Manager. This is a great opportunity for an experienced Snowflake professional to elevate their career by leading a team and designing the architecture of the data warehouse. If interested, please email Wendy Kolkmeyer at ***********************

    What You'll Do:
    - Architect and optimize the enterprise data warehouse using Snowflake (see the sketch below).
    - Develop and maintain automated data pipelines with Fivetran or similar ETL tools.
    - Design and enhance DBT data models to support analytics, reporting, and operational decision-making.
    - Oversee and improve Power BI reporting, ensuring data is accurate, accessible, and actionable for business users.
    - Establish and enforce enterprise data governance, standards, policies, and best practices.
    - Collaborate with business leaders to translate requirements into scalable, high-quality data solutions.
    - Enable advanced analytics and AI/ML initiatives through proper data structuring and readiness.
    - Drive cross-functional alignment, communication, and stakeholder engagement.
    - Lead, mentor, and develop members of the data team.
    - Ensure compliance, conduct system audits, and maintain business continuity plans.

    What Experience You'll Have:
    - 7+ years of experience in data architecture, data engineering, or enterprise data management.
    - Expertise in Snowflake, Fivetran (or similar ETL tools), DBT, and Power BI.
    - Strong proficiency in SQL and modern data architecture principles.
    - Proven track record in data governance, modeling, and data quality frameworks.
    - Demonstrated experience leading teams and managing complex data initiatives.
    - Ability to communicate technical concepts clearly and collaborate effectively with business stakeholders.

    What's Nice to Have: Manufacturing experience.

    Vernovis does not accept inquiries from Corp-to-Corp recruiting companies. Applicants must be currently authorized to work in the United States on a full-time basis and not violate any immigration or discrimination laws. Vernovis provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.
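    A minimal sketch of the Snowflake side of this stack: querying a curated model from Python with the official connector. The account, credentials, warehouse, and table are placeholders; real deployments would use key-pair auth or SSO rather than a password:

    ```python
    import snowflake.connector

    # Placeholder credentials and objects, for illustration only.
    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="***",
        warehouse="ANALYTICS_WH", database="DW", schema="MARTS",
    )

    cur = conn.cursor()
    cur.execute("""
        SELECT plant_id, DATE_TRUNC('month', produced_at) AS month,
               SUM(units) AS units
        FROM fct_production
        GROUP BY 1, 2
        ORDER BY 1, 2
    """)
    for row in cur.fetchmany(10):
        print(row)
    conn.close()
    ```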
    $90k-124k yearly est. 2d ago
  • Senior Data Analytics Engineer

    Revel It · 4.3 company rating

    Columbus, OH

    We are seeking a highly skilled Analytics Data Engineer with deep expertise in building scalable data solutions on the AWS platform. The ideal candidate is a 10/10 expert in Python and PySpark, with strong working knowledge of SQL. This engineer will play a critical role in translating business and end-user needs into robust analytics products, spanning ingestion, transformation, curation, and enablement for downstream reporting and visualization. You will work closely with both business stakeholders and IT teams to design, develop, and deploy advanced data pipelines and analytical capabilities that power enterprise decision-making.

    Key Responsibilities:

    Data Engineering & Pipeline Development:
    - Design, develop, and optimize scalable data ingestion pipelines using Python, PySpark, and AWS native services (see the sketch below).
    - Build end-to-end solutions to move large-scale big data from source systems into AWS environments (e.g., S3, Redshift, DynamoDB, RDS).
    - Develop and maintain robust data transformation and curation processes to support analytics, dashboards, and business intelligence tools.
    - Implement best practices for data quality, validation, auditing, and error handling within pipelines.

    Analytics Solution Design:
    - Collaborate with business users to understand analytical needs and translate them into technical specifications, data models, and solution architectures.
    - Build curated datasets optimized for reporting, visualization, machine learning, and self-service analytics.
    - Contribute to solution design for analytics products leveraging AWS services such as AWS Glue, Lambda, EMR, Athena, Step Functions, Redshift, Kinesis, Lake Formation, etc.

    Cross-Functional Collaboration:
    - Work with IT and business partners to define requirements, architecture, and KPIs for analytical solutions.
    - Participate in daily Scrum meetings, code reviews, and architecture discussions to ensure alignment with enterprise data strategy and coding standards.
    - Provide mentorship and guidance to junior engineers and analysts as needed.

    Engineering (Supporting Skills):
    - Employ strong skills in Python, PySpark, and SQL to support data engineering tasks, broader system integration requirements, and application-layer needs.
    - Implement scripts, utilities, and microservices as needed to support analytics workloads.

    Required Qualifications:
    - 5+ years of professional experience in data engineering, analytics engineering, or full-stack data development roles.
    - Expert-level proficiency (10/10) in Python and PySpark.
    - Strong working knowledge of SQL and other programming languages.
    - Demonstrated experience designing and delivering big-data ingestion and transformation solutions on AWS.
    - Hands-on experience with AWS services such as Glue, EMR, Lambda, Redshift, S3, Kinesis, CloudFormation, IAM, etc.
    - Strong understanding of data warehousing, ETL/ELT, distributed computing, and data modeling.
    - Ability to partner effectively with business stakeholders and translate requirements into technical solutions.
    - Strong problem-solving skills and the ability to work independently in a fast-paced environment.

    Preferred Qualifications:
    - Experience with BI/visualization tools such as Tableau.
    - Experience building CI/CD pipelines for data products (e.g., Jenkins, GitHub Actions).
    - Familiarity with machine learning workflows or MLOps frameworks.
    - Knowledge of metadata management, data governance, and data lineage tools.
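    The ingestion-with-validation pattern described above maps naturally onto an AWS Glue job. A minimal skeleton; it only runs inside a Glue environment, and the bucket and column names are hypothetical:

    ```python
    import sys
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext
    from pyspark.sql import functions as F

    # Standard Glue job boilerplate.
    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue = GlueContext(SparkContext.getOrCreate())
    job = Job(glue)
    job.init(args["JOB_NAME"], args)

    # Ingest raw events from S3, validate, and write curated Parquet.
    events = glue.spark_session.read.json("s3://raw-bucket/events/")
    valid = (events.filter(F.col("event_id").isNotNull())
                   .dropDuplicates(["event_id"]))
    dropped = events.count() - valid.count()
    print(f"dropped {dropped} invalid/duplicate rows")  # simple audit hook

    (valid.write.mode("append")
          .partitionBy("event_date")
          .parquet("s3://curated-bucket/events/"))
    job.commit()
    ```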
    $88k-120k yearly est. 16h ago
  • Senior Data Engineer

    Revel It · 4.3 company rating

    Columbus, OH

    Our direct client has a long-term contract need for a Sr. Data Engineer.

    Candidate Requirements:
    - Candidates must be local to Columbus, Ohio
    - Candidates must be willing and able to work a hybrid schedule (3 days in office & 2 days WFH)

    The team is responsible for the implementation of the new Contract Management System (FIS Asset Finance), as well as its integration into the overall environment and the migration of data from the legacy contract management system to the new system. The candidate will be focused on the delivery of data migration topics to ensure that high-quality data is migrated from the legacy systems to the new systems. This may involve data mapping, SQL development, and other technical activities to support data migration objectives (see the sketch below).

    Must-Have Experience:
    - Strong C# and SQL Server design and development skills; analysis and design. IMPORTANT MUST HAVE!
    - Strong technical analysis skills
    - Strong collaboration skills to work effectively with cross-functional teams
    - Exceptional ability to structure, illustrate, and communicate complex concepts clearly and effectively to diverse audiences, ensuring understanding and actionable insights
    - Demonstrated adaptability and problem-solving skills to navigate challenges and uncertainties in a fast-paced environment
    - Strong prioritization and time management skills to balance multiple projects and deadlines in a dynamic environment
    - In-depth knowledge of Agile methodologies and practices, with the ability to adapt and implement Agile principles in testing and delivery processes

    Nice to Have: ETL design and development; data mapping skills and experience; experience executing/driving technical design and implementation topics
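    Migration validation of this kind often reduces to reconciliation queries between the legacy and target databases. A minimal Python sketch using pyodbc; the DSNs and table pairs are hypothetical, and a real migration would compare far more than row counts:

    ```python
    import pyodbc

    # Hypothetical connections to the legacy and target SQL Server databases.
    legacy = pyodbc.connect("DSN=legacy_cms")
    target = pyodbc.connect("DSN=fis_asset_finance")

    def count(conn: pyodbc.Connection, table: str) -> int:
        return conn.cursor().execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

    # Row-count reconciliation per mapped entity (hypothetical mapping).
    for src, dst in [("dbo.Contracts", "dbo.Agreements"), ("dbo.Assets", "dbo.Assets")]:
        n_src, n_dst = count(legacy, src), count(target, dst)
        status = "OK" if n_src == n_dst else "MISMATCH"
        print(f"{status}: {src} ({n_src}) -> {dst} ({n_dst})")
    ```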
    $88k-120k yearly est. 16h ago
  • SAP Data Migration Developer

    Numeric Technologies · 4.5 company rating

    Englewood, NJ

    SAP S4 Data Migration Developer
    Duration: 6 Months
    Rate: Competitive Market Rate

    This key role is responsible for development and configuration of the SAP Data Services platform within the client's corporate technology organization, to deliver a successful data conversion and migration from SAP ECC to SAP S4 as part of project Keystone.

    Key Responsibilities:
    - Responsible for SAP Data Services development, design, job creation, and execution.
    - Responsible for efficient design, performance tuning, and ensuring timely data processing, validation, and verification.
    - Responsible for creating content within SAP Data Services for both master and transaction data conversion (standard SAP and custom data objects).
    - Responsible for data conversion using staging tables, working with SAP teams on data loads into SAP S4 and MDG environments.
    - Responsible for building validation rules, scorecards, and data for consumption in Information Steward, pursuant to conversion rules per the Functional Specifications.
    - Responsible for adhering to project timelines and deliverables, and accounting for object delivery across the teams involved.
    - Take part in meetings and execute plans, designs, and custom solution development within the client's O&T Engineering scope.
    - Work in all facets of SAP Data Migration projects, with a focus on SAP S4 Data Migration using the SAP Data Services platform.
    - Bring hands-on development experience with ETL from the legacy SAP ECC environment, conversions, and jobs.
    - Demonstrate capabilities with performance tuning and handling large data sets.
    - Understand SAP tables, fields, and load processes into SAP S4 and MDG systems.
    - Build validation rules; customize and deploy Information Steward scorecards, data reconciliation, and validation.
    - Be a problem solver and build robust conversion and validation per requirements.

    Skills and Experience:
    - 6-8 years of experience as a developer in the SAP Data Services application
    - At least 2 SAP S4 conversion projects with DMC, staging tables, and updating SAP Master Data Governance
    - Good communication skills; ability to deliver key objects on time and support testing and mock cycles
    - 4-5 years of development experience in SAP Data Services 4.3 Designer and Information Steward
    - Takes ownership and ensures high-quality results
    - Active in seeking feedback and making necessary changes
    - Specific previous experience: proven experience implementing SAP Data Services in a multinational environment; experience designing large-volume data loads to SAP S4 from SAP ECC; must have used HANA staging tables; experience developing Information Steward for data reconciliation and validation (not profiling)

    Requirements: Adhere to the work availability schedule as noted above and be on time for meetings. Written and verbal communication in English.
    $78k-98k yearly est. 4d ago
  • Sr Oracle Database Developer

    HCL Global Systems Inc. · 4.1 company rating

    Jersey City, NJ

    Special Instructions: We are hiring 5 Oracle Database Developers and will use this req to track them. Skills-wise, we need a senior-level Oracle Database Developer; that is the number-one skillset. They also want someone with strong SQL skills, including SQL coding and query work, as mentioned above. Some Python and shell scripting is desirable. Years of experience: 8-10+. This is a senior resource who can hit the ground running with no hand-holding, and who is a team player.
    $91k-110k yearly est. 3d ago
  • Data Engineer

    Interactive Resources-IR · 4.2 company rating

    Tempe, AZ

    About the Role: We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.

    What We're Looking For:
    - 8+ years designing and delivering scalable data pipelines in modern data platforms
    - Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
    - Ability to lead cross-functional initiatives in matrixed teams
    - Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
    - Hands-on experience with Azure, Snowflake, and Databricks, including system integrations

    Key Responsibilities:
    - Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
    - Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
    - Use Apache Airflow and similar tools for workflow automation and orchestration
    - Work with financial or regulated datasets while ensuring strong compliance and governance
    - Drive best practices in data quality, lineage, cataloging, and metadata management

    Primary Technical Skills:
    - Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
    - Design efficient Delta Lake models for reliability and performance (see the sketch below)
    - Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
    - Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
    - Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
    - Automate ingestion and workflows using Python and REST APIs
    - Support downstream analytics for BI, data science, and application workloads
    - Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
    - Automate DevOps workflows, testing pipelines, and workspace configurations

    Additional Skills:
    - Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
    - CI/CD: Azure DevOps
    - Orchestration: Apache Airflow (plus)
    - Streaming: Delta Live Tables
    - MDM: Profisee (nice-to-have)
    - Databases: SQL Server, Cosmos DB

    Soft Skills:
    - Strong analytical and problem-solving mindset
    - Excellent communication and cross-team collaboration
    - Detail-oriented with a high sense of ownership and accountability
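    Delta Lake modeling work of the kind listed above frequently centers on idempotent upserts. A minimal PySpark sketch of a MERGE into a Delta table; the paths and key are hypothetical, and a Databricks or Delta-enabled Spark session is assumed:

    ```python
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # Delta-enabled session assumed

    updates = spark.read.parquet("/mnt/landing/customers_batch/")
    target = DeltaTable.forPath(spark, "/mnt/silver/customers")

    # Idempotent upsert: update matched rows, insert new ones.
    (target.alias("t")
        .merge(updates.alias("u"), "t.customer_id = u.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())
    ```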
    $92k-122k yearly est. 3d ago
  • Data Engineer

    Interactive Resources-IR · 4.2 company rating

    Austin, TX

    About the Role: We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.

    What We're Looking For:
    - 8+ years designing and delivering scalable data pipelines in modern data platforms
    - Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
    - Ability to lead cross-functional initiatives in matrixed teams
    - Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
    - Hands-on experience with Azure, Snowflake, and Databricks, including system integrations

    Key Responsibilities:
    - Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
    - Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
    - Use Apache Airflow and similar tools for workflow automation and orchestration
    - Work with financial or regulated datasets while ensuring strong compliance and governance
    - Drive best practices in data quality, lineage, cataloging, and metadata management

    Primary Technical Skills:
    - Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks
    - Design efficient Delta Lake models for reliability and performance
    - Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
    - Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
    - Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
    - Automate ingestion and workflows using Python and REST APIs
    - Support downstream analytics for BI, data science, and application workloads
    - Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
    - Automate DevOps workflows, testing pipelines, and workspace configurations

    Additional Skills:
    - Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
    - CI/CD: Azure DevOps
    - Orchestration: Apache Airflow (plus)
    - Streaming: Delta Live Tables
    - MDM: Profisee (nice-to-have)
    - Databases: SQL Server, Cosmos DB

    Soft Skills:
    - Strong analytical and problem-solving mindset
    - Excellent communication and cross-team collaboration
    - Detail-oriented with a high sense of ownership and accountability
    $84k-111k yearly est. 3d ago
  • Senior Data Analyst

    Sharp Decisions · 4.6 company rating

    Phoenix, OR

    We are hiring a Sr. Data Analyst in Charlotte, NC for our direct client. Local candidates only. Experience with banking/financial services clients is mandatory.
    Pay range: $40-45/hr

    Required Skills:
    • Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
    • 6+ years of experience in data analysis, modeling, and supporting complex data ecosystems.
    • Hands-on experience with GCP, BigQuery, SQL, and Python.
    • Proficiency in SQL, Spark, and ETL/ELT processes.
    • Familiarity with JSON and semi-structured data handling (see the sketch below).
    • Understanding of retail banking data flows and regulatory requirements.
    • Strong problem-solving skills and the ability to communicate insights to technical and non-technical audiences.

    Key Responsibilities:
    • Perform advanced data analysis, including trend identification, forecasting, and segmentation.
    • Design and maintain logical and physical data models aligned with business requirements and compliance standards.
    • Apply strong knowledge of data warehousing principles, including Fact and Dimension tables and Star and Snowflake schema modeling.
    • Manage structured and semi-structured data formats (e.g., JSON).
    • Gather and document data requirements for building enterprise data warehouses, data marts, and BI solutions.
    • Collaborate with product managers and stakeholders to define KPIs, metrics, and reporting standards.
    • Ensure data quality and lineage, and maintain an organizational data glossary across all assets.
    • Document requirements for data models, pipelines, and ETL/ELT processes.
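    The semi-structured JSON handling mentioned above is often a one-liner in pandas. A minimal sketch with invented transaction events, showing nested JSON flattened into analysis-ready columns:

    ```python
    import pandas as pd

    # Invented semi-structured transaction events, as often landed from APIs.
    events = [
        {"txn_id": 1, "customer": {"id": "C01", "segment": "retail"},
         "amounts": {"usd": 120.50}},
        {"txn_id": 2, "customer": {"id": "C02", "segment": "wealth"},
         "amounts": {"usd": 9800.00}},
    ]

    # Flatten nested JSON into flat columns for modeling and reporting.
    flat = pd.json_normalize(events, sep="_")
    print(flat[["txn_id", "customer_id", "customer_segment", "amounts_usd"]])
    ```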
    $40-45 hourly 3d ago
