Data engineer jobs in Central Point, OR

- 15,095 jobs
  • Engineer IV

    Marriott International, Inc. 4.6 company rating

    Data engineer job in Truckee, CA

    Additional Information: Previous Experience Preferred, Day shift, Open Availability, Weekend Shifts. Job Number: 25193084. Job Category: Engineering & Facilities. Location: The Ritz-Carlton Lake Tahoe, 13031 Ritz Carlton Highlands Court, Truckee, California, United States, 96161. Schedule: Full Time. Located Remotely? No. Position Type: Non-Management. Pay Range: $40.80-$40.80 per hour. POSITION SUMMARY Inspect, repair, and maintain HVAC, air quality control, and refrigeration equipment. Maintain and conduct daily inspections of the mechanical plant. Monitor and control property temperature. Monitor and analyze energy and utilities usage. Calibrate all controls, gauges, meters, and other equipment. Assemble, install, test, repair, and maintain electrical systems or electronic wiring. Assemble, install, and maintain pipe assemblies, fittings, valves, etc. Perform routine pool and spa maintenance. Follow all company and safety and security policies and procedures; report maintenance problems, safety hazards, accidents, or injuries; and complete safety training and certifications. Ensure uniform and personal appearance are clean and professional, maintain confidentiality of proprietary information, and protect company assets. Welcome and acknowledge all guests according to company standards and anticipate and address guests' service needs. Speak with others using clear and professional language. Develop and maintain positive working relationships with others and listen and respond appropriately to the concerns of other employees. Ensure adherence to quality expectations and standards; and identify, recommend, develop, and implement new ways to increase organizational efficiency, productivity, quality, safety, and/or cost-savings. Reach overhead and below the knees. Move in confined or elevated spaces. Move over sloping, uneven, or slippery surfaces. Move up and down stairs, service ramps and/or a ladder. Grasp, turn, and manipulate objects of varying size and weight. 
Assist with moving, lifting, carrying, and placing of objects weighing in excess of 100 pounds. Move, lift, carry, push, pull, and place objects weighing less than or equal to 50 pounds without assistance. Perform other reasonable job duties as requested by Supervisors. PREFERRED QUALIFICATIONS Education: Technical, Trade, or Vocational School Degree. Related Work Experience: At least 3 years of related work experience. Supervisory Experience: No supervisory experience. REQUIRED QUALIFICATIONS License or Certification: EPA Universal Certification Driver's License HVAC Certification At Marriott International, we are dedicated to being an equal opportunity employer, welcoming all and providing access to opportunity. We actively foster an environment where the unique backgrounds of our associates are valued and celebrated. Our greatest strength lies in the rich blend of culture, talent, and experiences of our associates. We are committed to non-discrimination on any protected basis, including disability, veteran status, or other basis protected by applicable law. Marriott International considers for employment qualified applicants with criminal histories consistent with applicable federal, state and local law. At more than 100 award-winning properties worldwide, The Ritz-Carlton Ladies and Gentlemen create experiences so exceptional that long after a guest stays with us, the experience stays with them. Attracting the world's top hospitality professionals who curate lifelong memories, we believe that everyone succeeds when they are empowered to be creative, thoughtful and compassionate. Every day, we set the standard for rare and special luxury service the world over and pride ourselves on delivering excellence in the care and comfort of our guests. Your role will be to ensure that the “Gold Standards” of The Ritz-Carlton are delivered graciously and thoughtfully every day. 
The Gold Standards are the foundation of The Ritz-Carlton and are what guides us each day to be better than the next. It is this foundation and our belief that our culture drives success by which The Ritz Carlton has earned the reputation as a global brand leader in luxury hospitality. As part of our team, you will learn and exemplify the Gold Standards, such as our Employee Promise, Credo and our Service Values. And our promise to you is that we offer the chance to be proud of the work you do and who you work with. In joining The Ritz-Carlton, you join a portfolio of brands with Marriott International. Be where you can do your best work, begin your purpose, belong to an amazing global team, and become the best version of you.
    $40.8-40.8 hourly 4d ago
  • Cybersecurity Engineer III **

    SimVentions, Inc. - Glassdoor ✪ 4.6

    Data engineer job in San Diego, CA

    SimVentions, consistently voted one of Virginia's Best Places to Work, is looking for an experienced cybersecurity professional to join our team! As a Cybersecurity Engineer III, you will play a key role in advancing cybersecurity operations by performing in-depth system hardening, vulnerability assessment, and security compliance activities in accordance with DoD requirements. The ideal candidate will have a solid foundation in cybersecurity practices and proven experience supporting both Linux and Windows environments across DoD networks. You will work collaboratively with Blue Team, Red Team, and other Cybersecurity professionals on overall cyber readiness defense and system accreditation efforts. ** Position is contingent upon award of contract, anticipated in December of 2025. ** Clearance: An ACTIVE Secret clearance (IT Level II Tier 5 / Special-Sensitive Position) is required for this position. Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information. US Citizenship is required to obtain a clearance. 
Requirements: In-depth understanding of computer security, military system specifications, and DoD cybersecurity policies. Strong ability to communicate clearly and succinctly in written and oral presentations. Must possess one of the following DoD 8570.01-M IAT Level III baseline certifications: CASP+ CE, CCNP Security, CISA, CISSP (Associate), CISSP, GCED, GCIH, or CCSP. Responsibilities: Develop Assessment and Authorization (A&A) packages for various systems. Develop and maintain security documentation such as: Authorization Boundary Diagram, System Hardware/Software/Information Flow, System Security Plan, Privacy Impact Assessment, e-Authentication, Implementation Plan, System Level Continuous Monitoring Plan, Ports, Protocols and Services Registration, and Plan of Action and Milestones (POA&M). Conduct annual FISMA assessments. Perform Continuous Monitoring of Authorized Systems. Generate and update test plans; conduct testing of the system components using the Assured Compliance Assessment Solution (ACAS) tool, implement Security Technical Implementation Guides (STIG), and conduct Information Assurance Vulnerability Management (IAVM) reviews. Perform automated ACAS scanning, STIG, SCAP checks (Evaluate STIG, Tenable Nessus, etc.) 
on various standalone and networked systems. Analyze cybersecurity test scan results and develop/assist with documenting open findings in the Plan of Action and Milestones (POA&M). Analyze DISA Security Technical Implementation Guide test results and develop/assist with documenting open findings in the Plan of Action and Milestones. Preferred Skills and Experience: A combined total of ten (10) years of full-time professional experience in all of the following functional areas: computer security, military system specifications, and DoD cybersecurity policies; National Cyber Range Complex (NCRC) Total Ship Computing Environment (TSCE) Program requirements and mission, ship install requirements, and protocols (preferred); Risk Management Framework (RMF) and the implementation of Cybersecurity and IA boundary defense techniques and various IA-enabled appliances. Examples of these appliances and applications are Firewalls, Intrusion Detection System (IDS), Intrusion Prevention System (IPS), Switch/Routers, Cross Domain Solutions (CDS), eMASS, and Endpoint Security Solution (ESS). Performing STIG implementation; performing vulnerability assessments with the ACAS tool; remediating vulnerability findings, including implementing vendor patches on both Linux and Windows operating systems. Education: Bachelor of Science in Information Systems, Bachelor of Science in Information Technology, Bachelor of Science in Computer Science, or Bachelor of Science in Computer Engineering. Compensation: Compensation at SimVentions is determined by a number of factors, including, but not limited to, the candidate's experience, education, training, security clearance, work location, skills, knowledge, and competencies, as well as alignment with our corporate compensation plan and contract-specific requirements. The projected annual compensation range for this position is $90,000 - $160,000 (USD). 
This estimate reflects the standard salary range for this position and is just one component of the total compensation package that SimVentions offers. Benefits: At SimVentions, we're committed to supporting the total well-being of our employees and their families. Our benefit offerings include comprehensive health and welfare plans to serve a variety of needs. We offer: Medical, dental, vision, and prescription drug coverage; Employee Stock Ownership Plan (ESOP); Competitive 401(k) programs; Retirement and Financial Counselors; Health Savings and Health Reimbursement Accounts; Flexible Spending Accounts; Life insurance, short- & long-term disability; Continuing Education Assistance; Paid Time Off, Paid Holidays, Paid Leave (e.g., Maternity, Paternity, Jury Duty, Bereavement, Military); Third Party Employee Assistance Program that offers emotional and lifestyle well-being services, including free counseling; Supplemental Benefit Program. Why Work for SimVentions?: SimVentions is about more than just being a place to work with other growth-oriented, technically exceptional experts. It's also a fun place to work. Our family-friendly atmosphere encourages our employee-owners to imagine, create, explore, discover, and do great things together. Support Our Warfighters: SimVentions is a proud supporter of the U.S. military, and we take pride in our ability to provide relevant, game-changing solutions to our armed men and women around the world. Drive Customer Success: We deliver innovative products and solutions that go beyond the expected. This means you can expect to work with a team that will allow you to grow, have a voice, and make an impact. Get Involved in Giving Back: We believe a well-rounded company starts with well-rounded employees, which is why we offer diverse service opportunities for our team throughout the year. 
Build Innovative Technology: SimVentions takes pride in its innovative and cutting-edge technology, so you can be sure that whatever project you work on, you will be having a direct impact on our customer's success. Work with Brilliant People: We don't just hire the smartest people; we seek experienced, creative individuals who are passionate about their work and thrive in our unique culture. Create Meaningful Solutions: We are trusted partners with our customers, who bring us challenging and meaningful requirements to help solve. Employees who join SimVentions will enjoy additional perks like: Employee Ownership: Work with the best and help build YOUR company! Family focus: Work for a team that recognizes the importance of family time. Culture: Add to our culture of technical excellence and collaboration. Dress code: Business casual; we like to be comfortable while we work. Resources: Excellent facilities, tools, and training opportunities to grow in your field. Open communication: Work in an environment where your voice matters. Corporate Fellowship: Opportunities to participate in company sports teams and employee-led interest groups for personal and professional development. Employee Appreciation: Multiple corporate events throughout the year, including Holiday Events, Company Picnic, Imagineering Day, and more. Founding Partner of the FredNats Baseball team: Equitable distribution of tickets for every home game to be enjoyed by our employee-owners and their families from our private suite. Food: We have a lot of food around here! FTAC
    $90k-160k yearly 2d ago
  • Software Engineer - AI/ML, AWS Neuron

    Annapurna Labs (U.S.) Inc. 4.6 company rating

    Data engineer job in Cupertino, CA

    AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators and the Trn1 and Inf1 servers that use them. This role is for a software engineer in the Machine Learning Applications (ML Apps) team for AWS Neuron. This role is responsible for development, enablement, and performance tuning of a wide variety of ML model families, including massive-scale large language models like Llama 4, Mixtral, DBRX and beyond, as well as Stable Diffusion, Vision Transformers, and many more. The Distributed Training team works side by side with chip architects, compiler engineers, and runtime engineers to create, build, and tune distributed training solutions with Trainium. Experience training these large models using Python is a must. FSDP, DeepSpeed, and other distributed training libraries are central to this, and extending all of this for the Neuron-based system is key. Key job responsibilities: This role will help lead the efforts building distributed training support into PyTorch and JAX using XLA and the Neuron compiler and runtime stacks. This role will help tune these models to ensure the highest performance and maximize their efficiency when running on AWS Trainium. Strong software development and ML knowledge are both critical to this role. About the team: Inclusive Team Culture. Here at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon's culture of inclusion is reinforced within our 16 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust. Work/Life Balance: Our team puts a high value on work-life balance. 
It isn't about how many hours you spend at home or at work; it's about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives. Mentorship & Career Growth: Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge sharing and mentorship. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded professional and enable them to take on more complex tasks in the future. BASIC QUALIFICATIONS: - 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations - 3+ years of non-internship professional software development experience - 2+ years of non-internship design or architecture (design patterns, reliability and scaling) experience with new and existing systems - Experience programming with at least one software programming language PREFERRED QUALIFICATIONS: - Bachelor's degree in computer science or equivalent - 2+ years of experience with ML system design, including model architecture, and scalable training and inference systems - Experience with modern ML frameworks (TensorFlow, PyTorch, JAX, or similar) and programming in Python or similar languages Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. 
Los Angeles County applicants: Job duties for this position include: work safely and cooperatively with other employees, supervisors, and staff; adhere to standards of excellence despite stressful conditions; communicate effectively and respectfully with employees, supervisors, and staff to ensure exceptional customer service; and follow all federal, state, and local laws and Company policies. Criminal history may have a direct, adverse, and negative relationship with some of the material job duties of this position. These include the duties and responsibilities listed above, as well as the abilities to adhere to company policies, exercise sound judgment, effectively manage stress and work safely and respectfully with others, exhibit trustworthiness and professionalism, and safeguard business operations and the Company's reputation. Pursuant to the Los Angeles County Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner. Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $129,300/year in our lowest geographic market up to $223,600/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. 
Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************** This position will remain posted until filled. Applicants should apply via our internal or external career site.
    $129.3k-223.6k yearly 1d ago
  • Staff Data Scientist

    Quantix Search

    Data engineer job in San Jose, CA

    Staff Data Scientist | San Francisco | $250K-$300K + Equity. We're partnering with one of the fastest-growing AI companies in the world to hire a Staff Data Scientist. Backed by over $230M from top-tier investors and already valued at over $1B, they've secured customers that include some of the most recognizable names in tech. Their AI platform powers millions of daily interactions and is quickly becoming the enterprise standard for conversational AI. In this role, you'll bring rigorous analytics and experimentation leadership that directly shapes product strategy and company performance. What you'll do: Drive deep-dive analyses on user behavior, product performance, and growth drivers. Design and interpret A/B tests to measure product impact at scale. Build scalable data models, pipelines, and dashboards for company-wide use. Partner with Product and Engineering to embed experimentation best practices. Evaluate ML models, ensuring business relevance, performance, and trade-off clarity. What we're looking for: 5+ years in data science or product analytics at scale (consumer or marketplace preferred). Advanced SQL and Python skills, with strong foundations in statistics and experimental design. Proven record of designing, running, and analyzing large-scale experiments. Ability to analyze and reason about ML models (classification, recommendation, LLMs). Strong communicator with a track record of influencing cross-functional teams. If you're excited by the sound of this challenge, apply today and we'll be in touch.
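The experimentation work this listing emphasizes, designing and interpreting A/B tests at scale, typically reduces to a significance test on conversion rates. A minimal sketch in plain Python (the function name and sample counts are illustrative, not from the posting):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates
    between a control (A) and a treatment (B) group."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# hypothetical experiment: 4.8% vs 5.6% conversion, 10k users per arm
z, p = two_proportion_ztest(480, 10_000, 560, 10_000)
```

A production experimentation platform would layer power analysis, guardrail metrics, and sequential-testing corrections on top of this basic building block.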
    $250k-300k yearly 5d ago
  • Data Scientist

    Skale 3.7 company rating

    Data engineer job in San Francisco, CA

    We're working with a Series A health tech start-up pioneering a revolutionary approach to healthcare AI, developing neurosymbolic systems that combine statistical learning with structured medical knowledge. Their technology is being adopted by leading health systems and insurers to enhance patient outcomes through advanced predictive analytics. We're seeking Machine Learning Engineers who excel at the intersection of data science, modeling, and software engineering. You'll design and implement models that extract insights from longitudinal healthcare data, balancing analytical rigor, interpretability, and scalability. This role offers a unique opportunity to tackle foundational modeling challenges in healthcare, where your contributions will directly influence clinical, actuarial, and policy decisions. Key Responsibilities Develop predictive models to forecast disease progression, healthcare utilization, and costs using temporal clinical data (claims, EHR, laboratory results, pharmacy records) Design interpretable and explainable ML solutions that earn the trust of clinicians, actuaries, and healthcare decision-makers Research and prototype innovative approaches leveraging both classical and modern machine learning techniques Build robust, scalable ML pipelines for training, validation, and deployment in distributed computing environments Collaborate cross-functionally with data engineers, clinicians, and product teams to ensure models address real-world healthcare needs Communicate findings and methodologies effectively through visualizations, documentation, and technical presentations Required Qualifications Strong foundation in statistical modeling, machine learning, or data science, with preference for experience in temporal or longitudinal data analysis Proficiency in Python and ML frameworks (PyTorch, JAX, NumPyro, PyMC, etc.) 
Proven track record of transitioning models from research prototypes to production systems Experience with probabilistic methods, survival analysis, or Bayesian inference (highly valued) Bonus Qualifications Experience working with clinical data and healthcare terminologies (ICD, CPT, SNOMED CT, LOINC) Background in actuarial modeling, claims forecasting, or risk adjustment methodologies
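Since survival analysis is called out above as highly valued, here is a minimal sketch of the Kaplan-Meier estimator such work usually starts from, in dependency-free Python (the event times and censoring flags are invented for illustration):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve from (time, event) pairs,
    where event=1 is an observed event and event=0 is censoring.
    Returns [(t, S(t))] at each time with at least one event."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    times = [times[i] for i in order]
    events = [events[i] for i in order]
    n_at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(times):
        t = times[i]
        d = at = 0                      # events and subjects at this time
        while i < len(times) and times[i] == t:
            at += 1
            d += events[i]
            i += 1
        if d:
            surv *= 1 - d / n_at_risk   # product-limit update
            curve.append((t, surv))
        n_at_risk -= at
    return curve

# hypothetical follow-up data: times in months, 0 = censored
curve = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
```

Libraries such as lifelines provide this estimator (plus confidence intervals and Cox regression) out of the box; the explicit loop just shows the mechanics.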
    $123k-171k yearly est. 3d ago
  • Data Scientist

    Stand 8 Technology Consulting

    Data engineer job in Long Beach, CA

    STAND 8 provides end-to-end IT solutions to enterprise partners across the United States, with offices in Los Angeles, New York, New Jersey, Atlanta, and more, including internationally in Mexico and India. We are seeking a highly analytical and technically skilled Data Scientist to transform complex, multi-source data into unified, actionable insights used for executive reporting and decision-making. This role requires expertise in business intelligence design, data modeling, metadata management, data integrity validation, and the development of dashboards, reports, and analytics used across operational and strategic environments. The ideal candidate thrives in a fast-paced environment, demonstrates strong investigative skills, and can collaborate effectively with technical teams, business stakeholders, and leadership. Essential Duties & Responsibilities: As a Data Scientist, participate across the full solution lifecycle: business case, planning, design, development, testing, migration, and production support. Analyze large and complex datasets with accuracy and attention to detail. Collaborate with users to develop effective metadata and data relationships. Identify reporting and dashboard requirements across business units. Determine strategic placement of business logic within ETL or metadata models. Build enterprise data warehouse metadata/semantic models. Design and develop unified dashboards, reports, and data extractions from multiple data sources. Develop and execute testing methodologies for reports and metadata models. Document BI architecture, data lineage, and project report requirements. Provide technical specifications and data definitions to support the enterprise data dictionary. Apply analytical skills and Data Science techniques to understand business processes, financial calculations, data flows, and application interactions. 
Identify and implement improvements, workarounds, or alternative solutions related to ETL processes, ensuring integrity and timeliness. Create UI components or portal elements (e.g., SharePoint) for dynamic or interactive stakeholder reporting. As a Data Scientist, download and process SQL database information to build Power BI or Tableau reports (including cybersecurity awareness campaigns). Utilize SQL, Python, R, or similar languages for data analysis and modeling. Support process optimization through advanced modeling, leveraging experience as a Data Scientist where needed. Required Knowledge & Attributes: Highly self-motivated with strong organizational skills and the ability to manage multiple verbal and written assignments. Experience collaborating across organizational boundaries for data sourcing and usage. Analytical understanding of business processes, forecasting, capacity planning, and data governance. Proficient with BI tools (Power BI, Tableau, PBIRS, SSRS, SSAS). Strong Microsoft Office skills (Word, Excel, Visio, PowerPoint). High attention to detail and accuracy. Ability to work independently, demonstrate ownership, and ensure high-quality outcomes. Strong communication, interpersonal, and stakeholder engagement skills. Deep understanding that data integrity and consistency are essential for adoption and trust. Ability to shift priorities and adapt within fast-paced environments. Required Education & Experience: Bachelor's degree in Computer Science, Mathematics, or Statistics (or equivalent experience). 3+ years of BI development experience. 3+ years with Power BI and supporting Microsoft stack tools (SharePoint 2019, PBIRS/SSRS, Excel 2019/2021). 3+ years of experience with SDLC/project lifecycle processes. 3+ years of experience with data warehousing methodologies (ETL, Data Modeling). 3+ years of VBA experience in Excel and Access. Strong ability to write SQL queries and work with SQL Server 2017-2022. 
Experience with BI tools including PBIRS, SSRS, SSAS, Tableau. Strong analytical skills in business processes, financial modeling, forecasting, and data flow understanding. Critical thinking and problem-solving capabilities. Experience producing high-quality technical documentation and presentations. Excellent communication and presentation skills, with the ability to explain insights to leadership and business teams. Benefits Medical coverage and Health Savings Account (HSA) through Anthem Dental/Vision/Various Ancillary coverages through Unum 401(k) retirement savings plan Paid-time-off options Company-paid Employee Assistance Program (EAP) Discount programs through ADP WorkforceNow Additional Details The base range for this contract position is $73 - $83 / per hour, depending on experience. Our pay ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hires of this position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Qualified applicants with arrest or conviction records will be considered About Us STAND 8 provides end-to-end IT solutions to enterprise partners across the United States and globally with offices in Los Angeles, Atlanta, New York, Mexico, Japan, India, and more. STAND 8 focuses on the "bleeding edge" of technology and leverages automation, process, marketing, and over fifteen years of success and growth to provide a world-class experience for our customers, partners, and employees. Our mission is to impact the world positively by creating success through PEOPLE, PROCESS, and TECHNOLOGY. Check out more at ************** and reach out today to explore opportunities to grow together! By applying to this position, your data will be processed in accordance with the STAND 8 Privacy Policy.
    $73-83 hourly 1d ago
  • Data Scientist V

    Creospan Inc.

    Data engineer job in Menlo Park, CA

    Creospan is a growing tech collective of makers, shakers, and problem solvers, offering solutions today that will propel businesses into a better tomorrow. “Tomorrow's ideas, built today!” In addition to being able to work alongside equally brilliant and motivated developers, our consultants appreciate the opportunity to learn and apply new skills and methodologies to different clients and industries. ***NO C2C/3RD PARTY - LOOKING FOR W2 CANDIDATES ONLY; must be able to work in the US without sponsorship now or in the future.*** Summary: The main function of the Data Scientist is to produce innovative solutions driven by exploratory data analysis of complex and high-dimensional datasets. Job Responsibilities: • Apply knowledge of statistics, machine learning, programming, data modeling, simulation, and advanced mathematics to recognize patterns, identify opportunities, pose business questions, and make valuable discoveries leading to prototype development and product improvement. • Use a flexible, analytical approach to design, develop, and evaluate predictive models and advanced algorithms that lead to optimal value extraction from the data. • Generate and test hypotheses and analyze and interpret the results of product experiments. • Work with product engineers to translate prototypes into new products, services, and features and provide guidelines for large-scale implementation. • Provide Business Intelligence (BI) and data visualization support, which includes, but is not limited to, support for the online customer service dashboards and other ad-hoc requests requiring data analysis and visual support. Skills: • Experienced in programming languages such as Python and/or R, big data tools such as Hadoop, or data visualization tools such as Tableau. • The ability to communicate effectively in writing, including conveying complex information and promoting in-depth engagement on course topics. • Experience working with large datasets. 
Education/Experience: • Master of Science degree in computer science or in a relevant field.
    $107k-155k yearly est. 4d ago
  • Lead Data Scientist - Computer Vision

    Straive

    Data engineer job in Santa Clara, CA

    Lead Data Scientist - Computer Vision/Image Processing. About the Role: We are seeking a Lead Data Scientist to drive the strategy and execution of data science initiatives, with a particular focus on computer vision systems and image processing techniques. The ideal candidate has deep expertise in image processing techniques including Filtering, Binary Morphology, Perspective/Affine Transformation, and Edge Detection. Responsibilities: Solid knowledge of computer vision programs and image processing techniques: Filtering, Binary Morphology, Perspective/Affine Transformation, Edge Detection. Strong understanding of machine learning: Regression, Supervised and Unsupervised Learning. Proficiency in Python and libraries such as OpenCV, NumPy, scikit-learn, TensorFlow/PyTorch. Familiarity with version control (Git) and collaborative development practices.
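As a flavor of the edge-detection work this role calls for, here is a dependency-free sketch of the Sobel operator. In practice this is a one-liner with OpenCV (`cv2.Sobel`); the explicit loops just show what the filter computes (the image values are made up):

```python
def sobel_edges(img):
    """Gradient magnitude via 3x3 Sobel kernels on a 2-D list of
    grayscale values; border pixels are left at zero."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# a vertical step edge: dark left half, bright right half
img = [[0, 0, 100, 100] for _ in range(4)]
edges = sobel_edges(img)
```

The interior pixels straddling the step get a large gradient magnitude; uniform regions stay at zero, which is exactly why Sobel responses feed thresholding or Canny-style edge tracing.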
    $107k-154k yearly est. 5d ago
  • AI Data Scientist

    PTR Global

    Data engineer job in Cupertino, CA

Onsite in Cupertino, CA from Day 1 (client prefers local candidates) Hybrid Schedule: 3 Onsite Days (Tue, Wed, Thu) & 2 Remote Days (Mon, Fri) Long-term contract Direct client opportunity No mid layer / no implementation partners are involved Key points - Need someone focused on product management and integration of generative AI solutions - Excellent communication, organizational, and problem-solving skills We are seeking an AI Engineer to join our Legal Operations team and lead the design, development, and deployment of AI-powered tools and automation solutions that transform how our Legal Department operates. This is a unique opportunity for a technically skilled and product-minded professional who can bridge the gap between engineering, legal, and business functions. You will work closely with attorneys, legal ops specialists, and other engineering teams to identify opportunities for AI-driven efficiency, develop prototypes, and bring scalable solutions to life. The ideal candidate combines strong software engineering and AI expertise with excellent communication skills, product sensibility, and a curiosity about legal workflows and technology. Description As a Senior Data Scientist/AI Engineer, you will be responsible for overseeing the design and execution of key tool development programs. 
Key responsibilities may include: Develop and deploy AI solutions that enhance legal workflows, including contract review, document classification, knowledge management and workflow automation. Collaborate cross-functionally with attorneys, legal operations, compliance and engineering teams to identify and prioritize AI use cases. Act as a product developer and owner from concept to rollout: defining requirements, developing proofs of concept, collecting feedback, and iterating on solutions. Integrate large language models (LLMs) and other AI technologies into existing systems (e.g., document management, eDiscovery, CLM, or knowledge bases). Evaluate and integrate third-party legal AI tools and platforms as needed, ensuring compatibility and compliance with internal systems. Maintain strong documentation and governance around data usage, model performance and ethical AI standards. Stay current on emerging trends in AI, machine learning and legal tech to help shape the department's AI strategy. Minimum Qualifications Bachelor's degree in Computer Science, Data Science, Engineering, or related field (or equivalent experience). 5+ years of experience building and deploying AI/ML or automation solutions in production environments. Strong programming skills in Python (proven ability to quickly master new frameworks and tools). Demonstrated experience with modern AI architectures including context engineering, tool use and retrieval augmented generation. Proven ability to communicate complex technical concepts to non-technical stakeholders. Strong product development mindset: able to translate business needs into practical, scalable AI tools. Prior experience in or exposure to legal tech or legal operations. Preferred Qualifications Familiarity with DMS, document intelligence and CLM systems (e.g., Ironclad, Icertis, DocuSign CLM), document management platforms (e.g., iManage, NetDocuments) or legal AI tools (e.g., Harvey, Luminance, Casetext, Spellbook, etc.). 
Experience building internal AI assistants or chatbots for enterprise knowledge retrieval. Understanding of data privacy, compliance and governance frameworks relevant to legal data. Pay Range: $65/hr - $70/hr The specific compensation for this position will be determined by a number of factors, including the scope, complexity and location of the role as well as the cost of labor in the market; the skills, education, training, credentials and experience of the candidate; and other conditions of employment. Our full-time consultants have access to benefits including medical, dental, vision as well as 401K contributions.
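For context on the retrieval-augmented-generation qualification above, a minimal sketch of the retrieval step: rank documents by cosine similarity between a query embedding and document embeddings, then hand the top hits to the LLM as prompt context. The three-dimensional "embeddings" and document texts below are invented; a real system would use an embedding model and a vector store:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, corpus, top_k=2):
    """Return the texts of the top_k documents most similar to the query embedding."""
    scored = sorted(corpus, key=lambda d: cosine(query_vec, d["embedding"]), reverse=True)
    return [d["text"] for d in scored[:top_k]]

# Toy corpus with hand-made 3-D "embeddings" (real ones are hundreds of dimensions)
corpus = [
    {"text": "NDA clause on confidentiality", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Invoice payment terms",         "embedding": [0.1, 0.9, 0.1]},
    {"text": "Mutual non-disclosure terms",   "embedding": [0.8, 0.2, 0.1]},
]
context = retrieve([1.0, 0.0, 0.0], corpus)
# The retrieved passages would then be inserted into the LLM prompt.
```

Both confidentiality documents outrank the unrelated invoice document, which is the property the "G" half of RAG depends on.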
    $65 hourly 4d ago
  • Principal Data Scientist

    Hiretalent-Staffing & Recruiting Firm

    Data engineer job in Alhambra, CA

    The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms including Databricks, or similar, to meet increasing demand for predictive analytics and AI solutions. The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship. The Principal Data Scientist will possess exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights; strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles; working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams; outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel; a highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight; a meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables; and proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts. 5+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments. 3+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration. 
3+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing). 2+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring. 2+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms). 2+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development. 2+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases. 1+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices. Education: This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis. At least one industry-recognized certification in data science or cloud analytics is also required, such as: • Microsoft Azure Data Scientist Associate (DP-100) • Databricks Certified Data Scientist or Machine Learning Professional • AWS Machine Learning Specialty • Google Professional Data Engineer, or equivalent advanced analytics certifications. The certification is required and may not be substituted with additional experience.
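As a concrete instance of the regression experience the posting asks for, here is a minimal sketch of simple ordinary least squares in plain Python. In practice scikit-learn or Databricks ML would be used; the data points below are invented:

```python
def fit_ols(xs, ys):
    """Closed-form simple linear regression; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)                      # variance term
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))    # covariance term
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical observations, roughly y = 2x
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]
slope, intercept = fit_ols(xs, ys)
```

The fitted slope lands near 2 and the intercept near 0, matching the generating relationship; classification, clustering, and forecasting follow the same fit-then-validate workflow with different model families.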
    $97k-141k yearly est. 3d ago
  • Lead Data Engineer - (Automotive exp)

    Intelliswift-An LTTS Company

    Data engineer job in Torrance, CA

Role: Sr Technical Lead Duration: 12+ Month Contract Daily Tasks Performed: Lead the design, development, and deployment of a scalable, secure, and high-performance CDP SaaS product. Architect solutions that integrate with various data sources, APIs, and third-party platforms. Design, develop, and optimize complex SQL queries for data extraction, transformation, and analysis Build and maintain workflow pipelines using Digdag, integrating with data platforms such as Treasure Data, AWS, or other cloud services Automate ETL processes and schedule tasks using Digdag's YAML-based workflow definitions Implement data quality checks, logging, and alerting mechanisms within workflows Leverage AWS services (e.g., S3, Lambda, Athena) where applicable to enhance data processing and storage capabilities Ensure best practices in software engineering, including code reviews, testing, CI/CD, and documentation. Oversee data privacy, security, and compliance initiatives (e.g., GDPR, CCPA). Ensure adherence to security, compliance, and data governance requirements. Oversee development of real-time and batch data processing systems. Collaborate with cross-functional teams including data analysts, product managers, and software engineers to translate business requirements into technical solutions Collaborate with stakeholders to define technical requirements to align technical solutions with business goals and deliver product features. Mentor and guide developers, fostering a culture of technical excellence and continuous improvement. Troubleshoot complex technical issues and provide hands-on support as needed. Monitor, troubleshoot, and improve data workflows for performance, reliability, and cost-efficiency as needed Optimize system performance, scalability, and cost efficiency. What this person will be working on: As the Senior Technical Lead for our Customer Data Platform (CDP), the candidate will define the technical strategy, architecture, and execution of the platform. 
They will lead the design and delivery of scalable, secure, and high-performing solutions that enable unified customer data management, advanced analytics, and personalized experiences. This role demands deep technical expertise, strong leadership, and a solid understanding of data platforms and modern cloud technologies. It is a pivotal position that supports the CDP vision by mentoring team members and delivering solutions that empower our customers to unify, analyze, and activate their data. Position Success Criteria (Desired) - 'WANTS' Bachelor's or Master's degree in Computer Science, Engineering, or related field. 8+ years of software development experience, with at least 3+ years in a technical leadership role. Proven experience building and scaling SaaS products, preferably in customer data, marketing technology, or analytics domains Extensive hands-on experience with Presto, Hive, and Python Strong proficiency in writing complex SQL queries for data extraction, transformation, and analysis Familiarity with AWS data services such as S3, Athena, Glue, and Lambda Deep understanding of data modeling, ETL pipelines, workflow orchestration, and both real-time and batch data processing Experience ensuring data privacy, security, and compliance in SaaS environments Knowledge of Customer Data Platforms (CDPs), CDP concepts, and integration with CRM, marketing, and analytics tools Excellent communication, leadership, and project management skills Experience working with Agile methodologies and DevOps practices Ability to thrive in a fast-paced, agile environment Collaborative mindset with a proactive approach to problem-solving Stay current with industry trends and emerging technologies relevant to SaaS and customer data platforms.
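The "data quality checks, logging, and alerting mechanisms within workflows" responsibility above can be sketched as a standalone pipeline step. The function names, column names, and threshold below are illustrative assumptions, not part of the client's stack:

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("pipeline.quality")

def null_rates(rows, columns):
    """Fraction of missing values per column across a batch of dict-shaped rows."""
    n = len(rows)
    return {c: sum(1 for r in rows if r.get(c) is None) / n for c in columns}

def check_batch(rows, columns, max_null_rate=0.1):
    """Return columns whose null rate exceeds the threshold, logging each breach."""
    failing = [c for c, rate in null_rates(rows, columns).items() if rate > max_null_rate]
    for c in failing:
        log.warning("null-rate threshold exceeded for column %s", c)
    return failing

# Hypothetical batch of customer records
batch = [
    {"vin": "A1", "model": "Civic"},
    {"vin": "A2", "model": None},
    {"vin": None, "model": None},
    {"vin": "A4", "model": "CR-V"},
]
failing = check_batch(batch, ["vin", "model"], max_null_rate=0.3)
```

In a Digdag workflow a step like this would run after each load, and a non-empty `failing` list would fail the task or fire an alert rather than let bad data flow downstream.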
    $100k-141k yearly est. 4d ago
  • Sr Data Platform Engineer

    The Judge Group 4.7company rating

    Data engineer job in Elk Grove, CA

Hybrid role 3X a week in office in Elk Grove, CA; no remote capabilities This is a direct hire opportunity. We're seeking a seasoned Senior Data Platform Engineer to design, build, and optimize scalable data solutions that power analytics, reporting, and AI/ML initiatives. This full‑time role is hands‑on, working with architects, analysts, and business stakeholders to ensure data systems are reliable, secure, and high‑performing. Responsibilities: Build and maintain robust data pipelines (structured, semi‑structured, unstructured). Implement ETL workflows with Spark, Delta Lake, and cloud‑native tools. Support big data platforms (Databricks, Snowflake, GCP) in production. Troubleshoot and optimize SQL queries, Spark jobs, and workloads. Ensure governance, security, and compliance across data systems. Integrate workflows into CI/CD pipelines with Git, Jenkins, Terraform. Collaborate cross‑functionally to translate business needs into technical solutions. Qualifications: 7+ years in data engineering with production pipeline experience. Expertise in Spark ecosystem, Databricks, Snowflake, GCP. Strong skills in PySpark, Python, SQL. Experience with RAG systems, semantic search, and LLM integration. Familiarity with Kafka, Pub/Sub, vector databases. Proven ability to optimize ETL jobs and troubleshoot production issues. Agile team experience and excellent communication skills. Certifications in Databricks, Snowflake, GCP, or Azure. Exposure to Airflow, BI tools (Power BI, Looker Studio).
    $108k-153k yearly est. 3d ago
  • AI Data Engineer

    Hartleyco

    Data engineer job in San Jose, CA

    Member of Technical Staff - AI Data Engineer San Francisco (In-Office) $150K to $225K + Equity A high-growth, AI-native startup coming out of stealth is hiring AI Data Engineers to build the systems that power production-grade AI. The company has recently signed a Series A term sheet and is scaling rapidly. This role is central to unblocking current bottlenecks across data engineering, context modeling, and agent performance. Responsibilities: • Build distributed, reliable data pipelines using Airflow, Temporal, and n8n • Model SQL, vector, and NoSQL databases (Postgres, Qdrant, etc.) • Build API and function-based services in Python • Develop custom automations (Playwright, Stagehand, Zapier) • Work with AI researchers to define and expose context as services • Identify gaps in data quality and drive changes to upstream processes • Ship fast, iterate, and own outcomes end-to-end Required Experience: • Strong background in data engineering • Hands-on experience working with LLMs or LLM-powered applications • Data modeling skills across SQL and vector databases • Experience building distributed systems • Experience with Airflow, Temporal, n8n, or similar workflow engines • Python experience (API/services) • Startup mindset and bias toward rapid execution Nice To Have: • Experience with stream processing (Flink) • dbt or Clickhouse experience • CDC pipelines • Experience with context construction, RAG, or agent workflows • Analytical tooling (Posthog) What You Can Expect: • High-intensity, in-office environment • Fast decision-making and rapid shipping cycles • Real ownership over architecture and outcomes • Opportunity to work on AI systems operating at meaningful scale • Competitive compensation package • Meals provided plus full medical, dental, and vision benefits If this sounds like you, please apply now.
    $150k-225k yearly 2d ago
  • Data Engineer, Knowledge Graphs

    Mithrl

    Data engineer job in San Francisco, CA

    We imagine a world where new medicines reach patients in months, not years, and where scientific breakthroughs happen at the speed of thought. Mithrl is building the world's first commercially available AI Co-Scientist. It is a discovery engine that transforms messy biological data into insights in minutes. Scientists ask questions in natural language, and Mithrl responds with analysis, novel targets, hypotheses, and patent-ready reports. No coding. No waiting. No bioinformatics bottlenecks. We are one of the fastest growing tech bio companies in the Bay Area with 12x year over year revenue growth. Our platform is used across three continents by leading biotechs and big pharmas. We power breakthroughs from early target discovery to mechanism-of-action. And we are just getting started. ABOUT THE ROLE We are hiring a Data Engineer, Knowledge Graphs to build the infrastructure that powers Mithrl's biological knowledge layer. You will partner closely with the Data Scientist, Knowledge Graphs to take curated knowledge sources and transform them into scalable, reliable, production ready systems that serve the entire platform. Your work includes building ETL pipelines for large biological datasets, designing schemas and storage models for graph structured data, and creating the API surfaces that allow ML engineers, application teams, and the AI Co-Scientist to query and use the knowledge graph efficiently. You will also own the reliability, performance, and versioning of knowledge graph infrastructure across releases. This role is the bridge between biological knowledge ingestion and the high performance engineering systems that use it. If you enjoy working on data modeling, schema design, graph storage, ETL, and scalable infrastructure, this is an opportunity to have deep impact on the intelligence layer of Mithrl. 
WHAT YOU WILL DO Build and maintain ETL pipelines for large public biological datasets and curated knowledge sources Design, implement, and evolve schemas and storage models for graph structured biological data Create efficient APIs and query surfaces that allow internal teams and AI systems to retrieve nodes, relationships, pathways, annotations, and graph analytics Partner closely with the Data Scientists to operationalize curated relationships, harmonized variable IDs, metadata standards, and ontology mappings Build data models that support multi tenant access, versioning, and reproducibility across releases Implement scalable storage and indexing strategies for high volume graph data Maintain data quality, validate data integrity, and build monitoring around ingestion and usage Work with ML engineers and application teams to ensure the knowledge graph infrastructure supports downstream reasoning, analysis, and discovery applications Support data warehousing, documentation, and API reliability Ensure performance, reliability, and uptime for knowledge graph services WHAT YOU BRING Required Qualifications Strong experience as a data engineer or backend engineer working with data intensive systems Experience building ETL or ELT pipelines for large structured or semi structured datasets Strong understanding of database design, schema modeling, and data architecture Experience with graph data models or willingness to learn graph storage concepts Proficiency in Python or similar languages for data engineering Experience designing and maintaining APIs for data access Understanding of versioning, provenance, validation, and reproducibility in data systems Experience with cloud infrastructure and modern data stack tools Strong communication skills and ability to work closely with scientific and engineering teams Nice to Have Experience with graph databases or graph query languages Experience with biological or chemical data sources Familiarity with ontologies, controlled 
vocabularies, and metadata standards Experience with data warehousing and analytical storage formats Previous work in a tech bio company or scientific platform environment WHAT YOU WILL LOVE AT MITHRL You will build the core infrastructure that makes the biological knowledge graph fast, reliable, and usable Team: Join a tight-knit, talent-dense team of engineers, scientists, and builders Culture: We value consistency, clarity, and hard work. We solve hard problems through focused daily execution Speed: We ship fast (2x/week) and improve continuously based on real user feedback Location: Beautiful SF office with a high-energy, in-person culture Benefits: Comprehensive PPO health coverage through Anthem (medical, dental, and vision) + 401(k) with top-tier plans
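The graph data models this role describes can be sketched minimally as adjacency lists with labeled edges; a production system would sit behind a dedicated graph database and a versioned API surface. The gene names and relations below are illustrative only:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal directed graph with labeled edges, stored as adjacency lists."""

    def __init__(self):
        self.edges = defaultdict(list)  # node -> [(relation, target), ...]

    def add(self, source, relation, target):
        self.edges[source].append((relation, target))

    def neighbors(self, node, relation=None):
        """Targets reachable from `node`, optionally filtered by relation type."""
        return [t for r, t in self.edges[node] if relation is None or r == relation]

# Toy biological facts (illustrative, not curated data)
kg = KnowledgeGraph()
kg.add("TP53", "regulates", "CDKN1A")
kg.add("TP53", "interacts_with", "MDM2")
kg.add("CDKN1A", "inhibits", "CDK2")
targets = kg.neighbors("TP53", relation="regulates")
```

The relation-filtered neighbor query is the kind of primitive the role's API layer would expose to ML engineers and the AI Co-Scientist, with indexing, versioning, and provenance layered on top.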
    $110k-157k yearly est. 4d ago
  • Bigdata Engineer

Net2Source (N2S)

    Data engineer job in Mountain View, CA

Net2Source is a Global Workforce Solutions Company headquartered in NJ, USA with branch offices in the Asia Pacific Region. We are one of the fastest-growing IT consulting companies across the USA and we are hiring a "Bigdata Engineer" for one of our clients. We offer a wide gamut of consulting solutions customized to our 450+ clients ranging from Fortune 500/1000 to Start-ups across various verticals like Technology, Financial Services, Healthcare, Life Sciences, Oil & Gas, Energy, Retail, Telecom, Utilities, Manufacturing, the Internet, and Engineering. Position: Bigdata Engineer Location: MTV, CA (Onsite) - Locals Only Type: Contract Exp Level - 10+ Years Required Skills Min of 7+ years working with Apache Flink and Apache Spark 5+ years' experience with Java Strong expertise in Python Expertise developing new pipelines Adept at supporting and enhancing existing pipelines Strong experience with AWS Stack Why Work With Us? We believe in more than just jobs - we build careers. At Net2Source, we champion leadership at all levels, celebrate diverse perspectives, and empower you to make an impact. Think work-life balance, professional growth, and a collaborative culture where your ideas matter. Our Commitment to Inclusion & Equity Net2Source is an equal opportunity employer, dedicated to fostering a workplace where diverse talents and perspectives are valued. We make all employment decisions based on merit, ensuring a culture of respect, fairness, and opportunity for all, regardless of age, gender, ethnicity, disability, or other protected characteristics. Awards & Recognition America's Most Honored Businesses (Top 10%) Fastest-Growing Staffing Firm by Staffing Industry Analysts INC 5000 List for Eight Consecutive Years Top 100 by Dallas Business Journal Spirit of Alliance Award by Agile1 Maddhuker Singh Sr Account & Delivery Manager ***********************
    $110k-157k yearly est. 3d ago
  • Snowflake Data Engineer (DBT SQL)

    Maganti It Resources, LLC 3.9company rating

    Data engineer job in San Jose, CA

    Job Description - Snowflake Data Engineer (DBT SQL) Duration: 6 months Key Responsibilities • Design, develop, and optimize data pipelines using Snowflake and DBT SQL. • Implement and manage data warehousing concepts, metadata management, and data modeling. • Work with data lakes, multi-dimensional models, and data dictionaries. • Utilize Snowflake features such as Time Travel and Zero-Copy Cloning. • Perform query performance tuning and cost optimization in cloud environments. • Administer Snowflake architecture, warehousing, and processing. • Develop and maintain PL/SQL Snowflake solutions. • Apply design patterns for scalable and maintainable data solutions. • Collaborate with cross-functional teams and tech leads across multiple tracks. • Provide technical and functional guidance to team members. Required Skills & Experience • Hands-on Snowflake development experience (mandatory). • Strong proficiency in SQL and DBT SQL. • Knowledge of data warehousing concepts, metadata management, and data modeling. • Experience with data lakes, multi-dimensional models, and data dictionaries. • Expertise in Snowflake features (Time Travel, Zero-Copy Cloning). • Strong background in query optimization and cost management. • Familiarity with Snowflake administration and pipeline development. • Knowledge of PL/SQL and SQL databases (additional plus). • Excellent communication, leadership, and organizational skills. • Strong team player with a positive attitude.
    $119k-167k yearly est. 1d ago
  • Senior ML Data Engineer

    Midjourney

    Data engineer job in San Jose, CA

    We're the data team behind Midjourney's image generation models. We handle the dataset side: processing, filtering, scoring, captioning, and all the distributed compute that makes high-quality training data possible. What you'd be working on: Large-scale dataset processing and filtering pipelines Training classifiers for content moderation and quality assessment Models for data quality and aesthetic evaluation Data visualization tools for experimenting on dataset samples Testing/simulating distributed inference pipelines Monitoring dashboards for data quality and pipeline health Performance optimization and infrastructure scaling Occasionally jumping into inference optimization and other cross-team projects Our current stack: PySpark, Slurm, distributed batch processing across hybrid cloud setup. We're pragmatic about tools - if there's something better, we'll switch. We're looking for someone strong in either: Data engineering/ML pipelines at scale, or Cloud/infrastructure with distributed systems experience Don't need exact tech matches - comfort with adjacent technologies and willingness to learn matters more. We work with our own hardware plus GCP and other providers, so adaptability across different environments is valuable. Location: SF office a few times per week (we may make exceptions on location for truly exceptional candidates) The role offers variety, our team members often get pulled into different projects across the company, from dataset work to inference optimization. If you're interested in the intersection of large-scale data processing and cutting-edge generative AI, we'd love to hear from you.
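The filtering-and-scoring work described above can be sketched as a tiny single-machine pipeline; in production this runs in PySpark across a cluster, and the `quality_score` below is a hypothetical stand-in for a learned quality/aesthetic model:

```python
def quality_score(sample):
    """Hypothetical stand-in for a learned quality model: favors larger images."""
    return min(1.0, sample["width"] * sample["height"] / 1_000_000)

def pipeline(samples, min_score=0.5):
    """Lazily score samples and keep only those above the threshold."""
    for s in samples:
        s = dict(s, score=quality_score(s))
        if s["score"] >= min_score:
            yield s

# Invented sample metadata for three candidate training images
raw = [
    {"id": 1, "width": 512,  "height": 512},
    {"id": 2, "width": 1024, "height": 1024},
    {"id": 3, "width": 800,  "height": 800},
]
kept = list(pipeline(raw))
```

The generator shape mirrors a distributed map-filter stage: each sample is scored independently, so the same logic parallelizes cleanly across PySpark partitions or Slurm batch jobs.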
    $110k-156k yearly est. 5d ago
  • Data Engineer

    RSM Solutions, Inc. 4.4company rating

    Data engineer job in Irvine, CA

Thank you for stopping by to take a look at the Data Integration Engineer role I posted here on LinkedIn; I appreciate it. If you have read my job descriptions in the past, you will recognize how I write them. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc., and I have been recruiting technical talent for more than 23 years and have been in the tech space since the 1990s. Due to this, I actually write JDs myself...no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia...especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is. So, as with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very, very important. But we also have some social fit characteristics that are important. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine. He said something interesting to me not all that long ago. He mentioned that if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job'. So that ability to jump in and help is needed for success in this role. This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available. I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder for this role. The Data Engineer role is similar to the Data Integration role I posted. 
However, this one is more Ops-focused, with the orchestration of deployment and MLflow, including orchestrating and using data on the clusters and managing how the models are performing. This role focuses on coding and configuring on the ML side of the house. You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, allowing data scientists to focus on experimentation while the business sees rapid, reliable predictive insight. Here are some of the main responsibilities: Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets. Build high-throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault style modeling patterns for auditability. Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component. Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error budget metrics to enable proactive alerting. Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process. Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks. Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals. Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor. 
Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines. Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards. Here is what we are seeking: At least 6 years of experience building data pipelines in Spark or equivalent. At least 2 years deploying workloads on Kubernetes/Kubeflow. At least 2 years of experience with MLflow or similar experiment‑tracking tools. At least 6 years of experience in T‑SQL, Python/Scala for Spark. At least 6 years of PowerShell/.NET scripting. At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS. Kubernetes CKA/CKAD, Azure Data Engineer (DP‑203), or MLOps‑focused certifications (e.g., Kubeflow or MLflow) would be great to see. The ability to mentor engineers on best practices in containerized data engineering and MLOps.
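The "automated data quality and schema drift checks" responsibility above can be sketched in plain Python: infer the column types observed in an incoming batch and diff them against an expected schema. The column names and expected schema below are illustrative assumptions:

```python
def infer_schema(rows):
    """Map each column to the set of Python type names observed (nulls ignored)."""
    schema = {}
    for row in rows:
        for col, val in row.items():
            if val is not None:
                schema.setdefault(col, set()).add(type(val).__name__)
    return schema

def drift_report(expected, rows):
    """Columns added, missing, or whose observed types diverge from `expected`."""
    observed = infer_schema(rows)
    return {
        "added": sorted(set(observed) - set(expected)),
        "missing": sorted(set(expected) - set(observed)),
        "type_changed": sorted(
            c for c in set(expected) & set(observed)
            if not observed[c] <= {expected[c]}
        ),
    }

# Hypothetical expected schema and a drifting batch
expected = {"order_id": "int", "amount": "float"}
batch = [{"order_id": 1, "amount": "19.99", "channel": "web"}]
report = drift_report(expected, batch)
```

A non-empty report would be surfaced through the Prometheus exporters the posting mentions, turning silent upstream schema changes into alerts before model retraining consumes them.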
    $111k-166k yearly est. 5d ago
  • Analytics Data Modeler

    Rogue Credit Union 3.7company rating

    Data engineer job in Medford, OR

Department: Enterprise Analytics Department Reports To: Enterprise Analytics Manager Classification: Exempt Rogue Credit Union is a member-owned, not-for-profit financial cooperative headquartered in the Rogue Valley in beautiful southern Oregon, providing services to our members in select Oregon and Idaho counties. We strive to make an impact in the communities we serve by living, giving, and supporting local. Our mission is to provide exceptional member experiences that build mutually beneficial relationships to create the most loyal members in the nation. Here at Rogue, we believe in having a culture of happy team members who feel supported, have the chance to grow their careers, and LOVE where they work! Role: The Analytics Data Modeler is responsible for designing and maintaining the semantic data layer and analytics catalog that support governed, scalable, and business-aligned reporting and analytics across Rogue Credit Union. This role serves as the product owner of the semantic layer and data/analytics catalog, translating business definitions into technical requirements and enabling trusted, self-service insights. The Data Modeler collaborates closely with Analytics Engineers, BI Analysts, and business stakeholders to ensure consistent terminology, validated metrics, and high-quality data models that support strategic decision-making. Essential Functions & Responsibilities: 30% Design, build, and maintain semantic models that support enterprise reporting and analytics. Define KPIs, measures, hierarchies, and dimensions with clear business logic that aligns with governance-approved definitions. Optimize models for scalability, query performance, and ease of use in BI tools such as Tableau Server, Tableau Cloud, and Power BI. Continuously refine models based on stakeholder feedback and evolving business needs. 25% Engage directly with business stakeholders to validate terminology, definitions, and reporting logic. 
Document business requirements in a way that can be translated into technical specifications for Analytics Engineers to develop master data. Act as a bridge between technical and non-technical teams, ensuring semantic models accurately reflect the business use case. 15% Administer and curate the enterprise analytics catalog, ensuring all semantic models and datasets include complete metadata (definitions, lineage, and usage notes). Maintain glossary entries and steward assignments. Manage certification workflows so users can distinguish between exploratory data sources and trusted, production-ready models. 10% Contribute to data governance efforts by aligning semantic models with enterprise data definitions and quality standards. Support impact analysis by documenting dependencies across datasets, models, and reports. Partner with stewards and governance bodies to ensure data quality, consistent reporting logic, and alignment in terminology across the credit union. 10% Collaborate with the Data Architect and Analytics Engineers to integrate semantic models into the enterprise data platform. Ensure compatibility with Microsoft Fabric, Lakehouse/OneLake, Tableau Cloud, and Power BI. Participate in architecture testing, deployment, and versioning processes to ensure smooth adoption of future platform enhancements. 5% Assist with configuration of analytics tools to support metadata and governance needs. Contribute to metadata alignment efforts across Tableau's Data Management Add-On, Microsoft Purview, and related tools. Ensure metadata in these systems remains synchronized with the analytics catalog and semantic models. 5% Proactively performs other duties as needed or assigned. Knowledge & Skills: Experience: The role requires three to five years of experience in data modeling, metadata management, or analytics engineering, with a proven ability to translate business requirements into technical specifications. 
It involves designing and managing semantic data models in Microsoft Fabric or a similar platform, including building semantic layers and collaborating with business stakeholders to ensure alignment on terms, definitions, and reporting logic. Familiarity with BI tools such as Tableau and Power BI, SQL-based environments like SQL Server and Oracle SQL Developer, and data cataloging platforms such as Alation, Collibra, or Atlan is essential. Experience in financial services is preferred. Education: (1) A bachelor's degree, or (2) achievement of formal certifications recognized in the industry as equivalent to a bachelor's degree (e.g., information technology certifications in lieu of a degree). Interpersonal Skills: Work involves much personal contact with others inside and/or outside the organization for the purpose of first-level conflict resolution, building relationships, and soliciting cooperation. Discussions involve a higher degree of confidentiality and discretion, requiring diplomacy and tact in communication. Other Skills: The role requires strong SQL skills with experience in relational databases such as SQL Server and Oracle, along with proficiency in Tableau (including the data management add-on), Knime, and metadata tools. Familiarity with Microsoft Fabric, Lakehouse/OneLake, and Purview is important, as well as an understanding of data governance, cataloging, and semantic modeling best practices. Experience with data modeling tools like dbt, or similar is expected, combined with excellent communication, documentation, and stakeholder engagement skills. Physical Requirements: Some lifting required (up to 30 pounds), continuous sitting, occasional standing and occasional walking. Finger dexterity while using computer. Occasional talking and hearing in person and on the telephone. Continuous clarity of vision at 20 inches or less and continuous clarity of vision at 20 feet or more. 
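The posting's core duty is defining KPIs and measures with governance-approved business logic so every report computes them the same way. As a minimal, hypothetical sketch (the `Measure` class, field names, and sample data are invented for illustration and do not reflect any real credit-union system), a semantic-layer measure can pair a governed definition with its computation:

```python
from dataclasses import dataclass

# Hypothetical sketch: a semantic-layer measure binds a governance-approved
# business definition to the logic that computes it, so every downstream
# report reuses the same KPI instead of re-deriving it ad hoc.
@dataclass
class Measure:
    name: str
    definition: str   # governance-approved business definition
    column: str       # source column the measure aggregates
    agg: str = "sum"  # aggregation rule: "sum" or "avg"

    def compute(self, rows):
        values = [row[self.column] for row in rows]
        if self.agg == "avg":
            return sum(values) / len(values)
        return sum(values)

# Illustrative fact rows only -- not real member data.
loans = [
    {"member_id": 1, "balance": 12000},
    {"member_id": 2, "balance": 8000},
    {"member_id": 3, "balance": 4000},
]

total_balance = Measure(
    name="Total Loan Balance",
    definition="Sum of outstanding balances across all open loans.",
    column="balance",
)

print(total_balance.compute(loans))  # 24000
```

In practice this role would express the same idea in a platform's own modeling layer (e.g., a Microsoft Fabric semantic model or a dbt metric definition) rather than hand-rolled Python; the point is that the measure's business definition and its logic live in one governed place.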
Work Environment: This is an exempt position that requires flexibility in scheduling tasks and projects. The employee must independently monitor hours and judge the time to be spent on applicable duties, and must be able to adjust proactively to unpredictable schedules. As a direct representative of the Credit Union, the employee must conduct and present themselves courteously and professionally. Exposed to potentially hazardous conditions, e.g., robbery.

Disclaimer: Rogue Credit Union is a drug-free workplace and requires a drug screening test within 48 hours of an employment offer. Rogue Credit Union is an Equal Opportunity Employer and makes employment decisions without regard to race, color, national origin, religion, sex, age, disability, veteran status, or any other protected class. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, or are limited in the ability or unable to access or use this online application process and need an alternative method, please call **************. This job description is not a complete statement of all duties and responsibilities comprising the position.
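The posting also asks the Data Modeler to curate a catalog whose entries carry complete metadata (definitions, lineage, usage notes) and a certification workflow separating trusted models from exploratory ones. A minimal, hypothetical sketch of such an entry (catalog names and fields are invented for illustration, not drawn from any real tool):

```python
# Hypothetical sketch: catalog entries carrying the metadata the posting
# names (definition, lineage, usage notes) plus a certification flag so
# users can tell production-ready models from exploratory datasets.
catalog = {
    "member_loans_model": {
        "definition": "Semantic model covering open loan balances by member.",
        "lineage": ["core_banking.loans", "core_banking.members"],
        "usage_notes": "Refreshed nightly; suitable for board reporting.",
        "certified": True,
    },
    "marketing_scratch": {
        "definition": "Ad-hoc campaign exploration dataset.",
        "lineage": ["crm.exports"],
        "usage_notes": "Exploratory only; definitions not yet governed.",
        "certified": False,
    },
}

def trusted_models(entries):
    """Return only certified, production-ready model names."""
    return sorted(name for name, meta in entries.items() if meta["certified"])

print(trusted_models(catalog))  # ['member_loans_model']
```

Dedicated catalog platforms (Alation, Collibra, Atlan, Microsoft Purview) implement this same pattern with richer lineage graphs and approval workflows.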
    $88k-105k yearly est. 60d+ ago
  • SOESD - Senior Software Developer

    Southern Oregon Education Service District 3.6 company rating

    Data engineer job in Medford, OR

Technology/Senior Software Developer

About SOESD: Southern Oregon ESD is located in the Rogue Valley, the cultural and economic heart of southern Oregon, home to legendary snow skiing, white-water rafting, kayaking, hiking, and fishing, as well as numerous cultural venues and events such as art galleries, the Britt Music Festivals, the Ashland Independent Film Festival, and the renowned Oregon Shakespeare Festival. For more information on living in the Rogue Valley, please see: Education: ************************ Cultural Opportunities: ************************ Recreational Opportunities: ************************* Health Care: *************************

Senior Software Developer - Technology Services

Position Goal: Create, maintain, troubleshoot, and repair software, databases, and other information systems to support academic and administrative computing that enhances teaching and learning. While the emphasis of this position is software development, duties may include support for computers, servers, storage, related technology, and associated services.

Responsibility: Performs duties with limited supervision. Work involves significant project management, coordination of human and technical resources, synthesis of operational systems, and evaluation and recommendation of work to be performed following agency policies and procedures in support of the agency's mission. Work is characterized by responsibility for the work of others and by the ability to make independent decisions within established guidelines that align department goals with agency goals.

Essential Functions:
1. Perform all functions of Software Developer III.
2. Demonstrate initiative in forecasting and designing new services to meet changing regional needs.
3. Determine the viability of new regional projects to meet changing regional needs.
4. Assist the supervisor with designing and implementing regional projects, determining project prerequisites, overseeing milestone attainment, and coordinating resources.
5. Design, implement, and monitor regional service delivery to maximize availability, ensure integrity, and protect against intrusion.
6. Assist the supervisor in managing the software development life cycle.
7. Provide support for Software Developer III.
8. Other duties as assigned.

Minimum Prerequisites:
1. All requirements of Software Developer III.
2. Performance at the level of Software Developer III for at least one year.
3. Demonstrated capabilities of a Senior Software Developer.
An equivalent combination of experience, training, certification, education, or degree that provides the required knowledge, skills, and abilities may be used as an alternative to these prerequisites.

Location: In person at the Grape St. Office, 101 N. Grape Street, Medford, OR 97501.
Length of Position: 1.0 FTE/258 days (annual year-round position)
Salary: Per Classified Collective Bargaining Agreement, $34.27/hr. - $40.99/hr.
Supervisor: Program Administrator

SOESD Benefits (for .50 FTE and over): Southern Oregon ESD contributes up to $2,114.00 per month (family coverage) toward health, dental, and vision insurance premiums for qualified employees. Employer-paid PERS (Public Employee Retirement System), including an additional 6% individual investment account with PERS. 9 paid holidays. A generous sick, vacation, and discretionary (personal) leave package is also provided to qualified employees.

Health Insurance: OEBB MODA, including medical, dental, vision, and prescription coverage; the monthly premium is deducted pre-tax. Options: Health Savings Accounts; Flexible Spending Accounts (medical and day-care expenses); optional coverage for additional life, AD&D, and short-term disability insurance through Aflac or American Fidelity; 403(b); Mercy Flights membership.

Physical Requirements: The following physical requirements are essential functions of this position:
1. Stand/Walk: 1-4 Hrs/Day
2. Sit: 1-4 Hrs/Day
3. Drive: 1-4 Hrs/Day
4. Bending: Occasionally
5. Squat: Occasionally
6. Climb Stairs: Occasionally
7. Single Grasping: Occasionally
8. Pushing: Occasionally
9. Pulling: Occasionally
10. Fine Manipulation: Occasionally
11. Repetitive Foot Controls: Limited
12. Lifting (less than 25 lbs): Occasionally
13. Lifting (25-50 lbs): Occasionally
14. Lifting (50-75 lbs): Occasionally
15. Lifting (75-100 lbs): Limited

Note: This is not necessarily an exhaustive or all-inclusive list of responsibilities, skills, duties, requirements, efforts, functions, or working conditions associated with the job. This job description is not a contract of employment or a promise or guarantee of any specific terms or conditions of employment. The school district may add to, modify, or delete any aspect of this job (or the position itself) at any time as it deems advisable.

EQUAL OPPORTUNITY: SOESD is an equal opportunity employer that complies with applicable employment discrimination laws, including ORS Ch. 659, Title VII of the Civil Rights Act of 1964, Title IX of the Educational Amendments of 1972, the Rehabilitation Act of 1973, the Americans with Disabilities Act, and the Age Discrimination in Employment Act.
    $34.3-41 hourly 60d+ ago

Learn more about data engineer jobs

How much does a data engineer earn in Central Point, OR?

The average data engineer in Central Point, OR earns between $74,000 and $142,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in Central Point, OR

$102,000