
Data engineer jobs in Menifee, CA

- 1,463 jobs
  • Applied Data Scientists

    Mercor

    Data engineer job in Ontario, CA

    **1. Role Overview**

    Mercor is seeking applied data science professionals to support a strategic analytics initiative with a global enterprise. This contract-based opportunity focuses on extracting insights, building statistical models, and informing business decisions through advanced data science techniques. Freelancers will translate complex datasets into actionable outcomes using tools like Python, SQL, and visualization platforms. This short-term engagement emphasizes experimentation, modeling, and stakeholder communication - distinct from production ML engineering.

    **2. Key Responsibilities**

    - Translate business questions into data science problems and analytical workflows
    - Conduct data wrangling, exploratory analysis, and hypothesis testing (an illustrative hypothesis-testing sketch follows this listing)
    - Develop statistical models and predictive tools for decision support
    - Create compelling data visualizations and dashboards for business users
    - Present findings and recommendations to non-technical stakeholders

    **3. Ideal Qualifications**

    - 5+ years of applied data science or analytics experience in business settings
    - Proficiency in Python or R (pandas, NumPy, Jupyter) and strong SQL skills
    - Experience with data visualization tools (e.g., Tableau, Power BI)
    - Solid understanding of statistical modeling, experimentation, and A/B testing
    - Strong communication skills for translating technical work into strategic insights

    **4. More About the Opportunity**

    - Remote
    - Expected commitment: min 30 hours/week
    - Project duration: ~6 weeks

    **5. Compensation & Contract Terms**

    - $75-100/hour
    - Paid weekly via Stripe Connect
    - You'll be classified as an independent contractor

    **6. Application Process**

    - Submit your resume, followed by a domain expertise interview and a short form

    **7. About Mercor**

    - Mercor is a talent marketplace that connects top experts with leading AI labs and research organizations
    - Our investors include Benchmark, General Catalyst, Adam D'Angelo, Larry Summers, and Jack Dorsey
    - Thousands of professionals across domains like law, creatives, engineering, and research have joined Mercor to work on frontier projects shaping the next era of AI
    $75-100 hourly 17d ago
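
    The hypothesis-testing and A/B-testing work this listing describes can be illustrated with a minimal Python sketch. This is not Mercor's methodology; the metric, group sizes, and counts below are invented for illustration.

    ```python
    # Minimal A/B test sketch: compare conversion rates between a control and a
    # treatment group with a two-proportion z-test (statsmodels).
    # All numbers and names are illustrative, not taken from the job posting.
    import numpy as np
    from statsmodels.stats.proportion import proportions_ztest

    conversions = np.array([412, 468])   # successes in control, treatment
    visitors    = np.array([5000, 5000]) # sample sizes

    stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
    lift = conversions[1] / visitors[1] - conversions[0] / visitors[0]

    print(f"z = {stat:.2f}, p = {p_value:.4f}, absolute lift = {lift:.3%}")
    # Reject the no-difference hypothesis only if p is below the alpha chosen
    # before the experiment started (e.g. 0.05).
    ```

    The same pattern (pick the test and the significance threshold before looking at results) applies whether the comparison is run in pandas, SQL, or a dashboard tool.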
  • Data Engineer

    ABI Document Support Services

    Data engineer job in Loma Linda, CA

    ABI Document Support Services is seeking a highly skilled Data Engineer with proven experience in Power BI and data pipeline development. This person will ideally work onsite out of the Loma Linda, CA office on Tuesdays, Wednesdays, and Thursdays, and remotely on Mondays and Fridays. The ideal candidate will design, build, and maintain scalable data infrastructure to support analytics and reporting needs across the organization. You'll work closely with business stakeholders and analysts to ensure data accuracy, performance, and accessibility.

    Responsibilities:

    - Design, build, and maintain ETL/ELT pipelines for structured and unstructured data from various sources (an illustrative star-schema ETL sketch follows this listing).
    - Develop and manage data models, data warehouses, and data lakes to support analytics and BI initiatives.
    - Create and optimize Power BI dashboards and reports, ensuring accurate data representation and performance.
    - Collaborate with cross-functional teams to identify business requirements and translate them into technical solutions.
    - Implement and maintain data quality, governance, and security standards.
    - Monitor and optimize data pipeline performance, identifying bottlenecks and opportunities for improvement.
    - Integrate data from multiple systems using tools such as Azure Synapse, Microsoft Fabric, SQL, Python, or Spark.
    - Support data validation, troubleshooting, and root cause analysis for data inconsistencies.
    - Document data processes, architecture, and system configurations.

    Required:

    - Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
    - 4-6 years of experience as a Data Engineer, BI Developer, or related role.
    - Strong proficiency in SQL and data modeling techniques.
    - Hands-on experience with Power BI (DAX, Power Query, and data visualization best practices).
    - Experience with ETL tools and data orchestration frameworks (e.g., Azure Synapse, Airflow, or SSIS).
    - Familiarity with cloud data platforms (Azure, AWS, or GCP).
    - Strong understanding of data warehousing concepts (e.g., star schema, snowflake schema).

    Preferred:

    - Experience with Python or Scala for data processing.
    - Knowledge of Azure Synapse, Databricks, or similar technologies.
    - Understanding of CI/CD pipelines and version control systems (Git).
    - Exposure to data governance and security frameworks.

    Soft Skills:

    - Strong analytical and problem-solving abilities.
    - Excellent communication and collaboration skills.
    - Detail-oriented with a focus on data accuracy and integrity.
    - Ability to work independently and manage multiple priorities in a fast-paced environment.

    WHO WE ARE

    ABI Document Support Services is the largest nationwide provider of records retrieval, subpoena services, and document management for the legal and insurance industries. There is no other company in the market that provides the volume of successfully retrieved records or the document management solutions that ABI offers. Our singular focus is records retrieval and the most advanced technology solutions for our clients to manage, analyze, and summarize those retrieved records. We are committed to continually raising the bar for cost-effective record retrieval and more thorough analysis and summarization.

    Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, pregnancy, genetic information, disability, status as a protected veteran, or any other protected category under applicable federal, state, and local laws. Equal Opportunity Employer - Minorities/Females/Disabled/Veterans. ABI offers a fast-paced team atmosphere with competitive benefits (medical, vision, dental), paid time off, and 401k. #LI-MB1
    $99k-140k yearly est. 1d ago
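
    As a rough illustration of the ETL and star-schema modeling this listing mentions, here is a minimal Python/pandas sketch that splits raw records into a dimension and a fact table and loads them into a SQL database. The table names, columns, and SQLite target are hypothetical stand-ins, not ABI's actual stack (which centers on Azure Synapse, Fabric, and Power BI).

    ```python
    # Minimal ETL sketch: load raw order records, split them into a dimension and
    # a fact table (star-schema style), and write them to a SQL database that a
    # BI tool such as Power BI could query. Names and data are hypothetical.
    import sqlite3
    import pandas as pd

    raw = pd.DataFrame({
        "order_id":   [1, 2, 3],
        "customer":   ["Acme", "Acme", "Globex"],
        "order_date": ["2024-01-03", "2024-01-05", "2024-02-01"],
        "amount":     [120.0, 75.5, 310.0],
    })

    # Dimension: one row per customer, with a surrogate key.
    dim_customer = (raw[["customer"]].drop_duplicates()
                    .reset_index(drop=True)
                    .rename_axis("customer_key").reset_index())

    # Fact: measures plus foreign keys into the dimension.
    fact_orders = raw.merge(dim_customer, on="customer")[
        ["order_id", "customer_key", "order_date", "amount"]
    ]

    with sqlite3.connect("warehouse.db") as conn:
        dim_customer.to_sql("dim_customer", conn, if_exists="replace", index=False)
        fact_orders.to_sql("fact_orders", conn, if_exists="replace", index=False)
    ```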
  • Senior Data Consultant - Supply Chain Planning

    Bristlecone 3.9 company rating

    Data engineer job in Corona, CA

    🚀 We're Hiring: Senior Data Consultant (Supply Chain Planning)

    Bristlecone, a Mahindra company, is a leading supply chain and business analytics advisor, rated by Gartner as one of the top ten system integrators in the supply chain space. We have been a trusted partner to global enterprises such as Applied Materials, Exxon Mobil, Flextronics, Nestle, Unilever, Whirlpool, and many others.

    🔍 Project Overview: We are looking for a strong Data Consultant to support our planning projects. The ideal candidate will have a solid understanding of planning processes and data management within a supply chain or business planning environment. While deep configuration knowledge of SAP IBP is not mandatory, the consultant must have a strong grasp of planning data, business rules, and their impact on planning outcomes. This is a strategic initiative aimed at transforming planning processes across raw materials, finished goods, and packaging materials. You'll be the go-to expert for managing end-to-end planning data across SAP IBP and ECC systems (SD, MM, PP).

    🛠️ Key Responsibilities:

    - Collaborate with planning teams to analyze, validate, and manage data relevant to planning processes (an illustrative data-readiness check follows this listing).
    - Demonstrate a clear understanding of basic planning functionalities and how data supports them.
    - Identify, define, and manage data elements that impact demand, supply, and inventory planning.
    - Understand and document business rules and prerequisites related to data maintenance and planning accuracy.
    - Coordinate data collection activities from super users and end users across multiple functions.
    - Support data readiness for project milestones including testing, validation, and go-live.
    - Explain how different data elements influence planning outcomes to non-technical stakeholders.
    - Work closely with functional and technical teams to ensure data integrity and consistency across systems.

    Required Skills & Qualifications:

    - Strong understanding of planning processes (demand, supply, or S&OP).
    - Proven experience working with planning master data (e.g., product, location, BOM, resources).
    - Ability to analyze complex datasets and identify inconsistencies or dependencies.
    - Excellent communication and coordination skills with cross-functional teams.
    - Exposure to SAP IBP, APO, or other advanced planning tools (preferred but not mandatory).
    - Strong business acumen with the ability to link data quality to planning outcomes.
    - 5-10 years of relevant experience in data management, planning, or supply chain roles.

    Preferred Qualifications:

    - Experience with large-scale planning transformation or ERP implementation projects.
    - Knowledge of data governance and data quality frameworks.
    - Experience working with super users/end users for data validation and readiness.

    Privacy Notice Declarations for California-based candidates/jobs: ********************************************************
    $85k-113k yearly est. 3d ago
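
    A data-readiness check of the kind this role coordinates can be sketched in a few lines of pandas; the master-data tables and material codes below are hypothetical, not actual SAP IBP/ECC structures.

    ```python
    # Minimal data-readiness check sketch: flag bill-of-material components that
    # are missing from the product master, the kind of inconsistency that breaks
    # supply planning runs. Data and column names are hypothetical.
    import pandas as pd

    product_master = pd.DataFrame({"material": ["RM-100", "RM-200", "FG-900"]})
    bom = pd.DataFrame({
        "parent":    ["FG-900", "FG-900", "FG-900"],
        "component": ["RM-100", "RM-200", "RM-300"],  # RM-300 is not maintained
    })

    missing = bom[~bom["component"].isin(product_master["material"])]
    print(missing)  # components planners must create before testing or go-live
    ```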
  • Senior Software Engineer

    RIS Rx 3.6 company rating

    Data engineer job in Orange, CA

    Job Title: Sr. Software Engineer
    Reports to: CTO
    FLSA Status: Full-time, Exempt

    About Our Organization: RIS Rx (pronounced “RISE”) is a healthcare technology organization with a strong imprint in the patient access and affordability space. RIS Rx has quickly become an industry leader in delivering impactful solutions to stakeholders across the healthcare continuum. RIS Rx is proud to offer an immersive service portfolio to help address common access barriers. We don't believe in a “one size fits all” approach to our service offerings. Our philosophy is to bring forward innovation, value, and service to everything that we do. This approach has allowed us to have the opportunity to serve countless patients to help produce better treatment outcomes and an overall improved quality of life. Here at RIS Rx, we invite our partners and colleagues to “Rise Up” with us to bring accessible healthcare and solutions for all.

    Job Summary: We are seeking a highly skilled Senior Software Engineer to lead the design, development, and optimization of advanced technology solutions that address revenue leakage and operational challenges for pharmaceutical manufacturers. This role will play a key part in shaping scalable healthcare technology platforms, mentoring engineering talent, and driving architectural and process improvements. The Senior Software Engineer will collaborate with cross-functional teams, including product, clinical, and operations stakeholders, to deliver secure, high-quality, and innovative software solutions. The ideal candidate is a hands-on technical leader with expertise in modern software development practices, cloud-native architectures, and healthcare or pharmaceutical systems.

    Responsibilities:

    - Lead the design, development, and maintenance of complex technology solutions that identify and mitigate gross-to-net (GTN) revenue leakage for pharmaceutical manufacturers
    - Mentor junior engineers and provide technical guidance on architecture decisions, code quality, and best practices
    - Collaborate with cross-functional teams including product managers, pharmacists, operations, and other software engineers to deliver high-quality software solutions
    - Drive technical initiatives and lead architectural discussions for scalable healthcare technology platforms serving multiple pharmaceutical manufacturers
    - Write clean, efficient, and well-documented code following established coding standards and best practices while establishing new standards for the team
    - Lead code reviews to ensure code quality, maintainability, and knowledge sharing across the team
    - Debug and troubleshoot complex software issues, implementing fixes and optimizations for mission-critical systems
    - Provide advanced production support for systems, including monitoring, incident response, resolution of critical issues, and post-incident analysis
    - Research and evaluate emerging technologies and industry trends, making recommendations for technology adoption and development process improvements
    - Lead agile development processes including sprint planning, daily standups, and retrospectives, while coaching team members on agile best practices

    Skills:

    - 5+ years of experience in software development with advanced proficiency in languages like TypeScript and frameworks like React
    - Strong commitment to software quality with a deep understanding of design patterns, clean code practices, and software architecture principles
    - Advanced experience with AWS cloud services, infrastructure-as-code, and cloud-native development patterns
    - Experience with database systems like PostgreSQL, SQL query optimization, and data modeling
    - Advanced experience with web development technologies including HTML/CSS and modern JavaScript frameworks
    - Experience leading technical projects and mentoring other developers
    - Proven experience leading Agile/Scrum teams and development practices
    - Experience with system design, scalability considerations, and performance optimization
    - Understanding of healthcare data standards and pharmaceutical industry processes preferred
    - Experience with CI/CD pipelines, automated testing, and DevOps practices
    - Strong leadership and mentoring skills with the ability to guide technical decision-making
    - Excellent problem-solving skills and ability to work independently while leading cross-functional initiatives
    - Exceptional communication skills and ability to explain complex technical concepts to both technical and non-technical stakeholders

    Education: This position requires a Bachelor's degree in Computer Science, Software Engineering, or a related technical field.
    $114k-152k yearly est. 2d ago
  • Senior Software Engineer - Full Stack & DevOps

    Beacon Healthcare Systems 4.5 company rating

    Data engineer job in Huntington Beach, CA

    We're seeking a Senior Software Engineer who thrives at the intersection of application development and DevOps. You'll design, build, and deploy scalable SaaS solutions for Medicare and Medicaid health plans, while also contributing to the automation, reliability, and security of our development lifecycle. This role is central to delivering high-quality features for our Compliance, Appeals & Grievances, and Universe Scrubber products.

    Key Responsibilities:

    - Application Development: Design and implement backend services, APIs, and user interfaces using modern frameworks and cloud-native architecture. Ensure performance, scalability, and maintainability across the stack.
    - DevOps Integration: Collaborate with infrastructure and DevOps teams to build and maintain CI/CD pipelines, automate deployments, and optimize environment provisioning across development, QA, and production.
    - Cloud-Native Engineering: Develop and deploy applications on AWS, leveraging services like Lambda, ECS, RDS, and S3. Ensure solutions are secure, resilient, and compliant with healthcare regulations (an illustrative Lambda handler sketch follows this listing).
    - Quality & Compliance: Write clean, testable code and participate in peer reviews, unit testing, and performance tuning. Ensure all software adheres to CMS, HIPAA, and internal compliance standards.
    - AI-Enabled Features: Support integration of AI/ML capabilities into product workflows, such as intelligent routing of grievances or automated compliance checks.
    - Mentorship & Collaboration: Provide technical guidance to junior engineers and collaborate with cross-functional teams to translate healthcare business needs into technical solutions.

    Qualifications:

    - Bachelor's degree in computer science or related field
    - 5+ years of experience in software development, with exposure to DevOps practices
    - Proficiency in languages such as Java, Python, or C#, and experience with cloud platforms (preferably AWS)
    - Familiarity with CI/CD tools (e.g., Jenkins, GitHub Actions), infrastructure-as-code (e.g., Terraform, Ansible), and containerization (e.g., Docker, Kubernetes)
    - Understanding of healthcare data formats (EDI, HL7, FHIR) and regulatory frameworks
    $112k-147k yearly est. 19h ago
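
    As a loose illustration of the AWS Lambda work mentioned above, here is a minimal Python handler that validates a payload and forwards it to a queue. The event shape, environment variable, queue, and field names are assumptions for the sketch, not Beacon's actual design.

    ```python
    # Minimal AWS Lambda handler sketch: validate an incoming grievance record
    # and drop it onto an SQS queue for downstream routing. All names and the
    # event structure are hypothetical.
    import json
    import os
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = os.environ.get("GRIEVANCE_QUEUE_URL", "")

    REQUIRED_FIELDS = {"member_id", "received_date", "category"}

    def lambda_handler(event, context):
        record = json.loads(event.get("body", "{}"))
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            return {"statusCode": 400,
                    "body": json.dumps({"error": f"missing fields: {sorted(missing)}"})}

        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(record))
        return {"statusCode": 202, "body": json.dumps({"status": "queued"})}
    ```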
  • Data Scientist

    Alignment Healthcare 4.7 company rating

    Data engineer job in Orange, CA

    Alignment Health is breaking the mold in conventional health care, committed to serving seniors and those who need it most: the chronically ill and frail. It takes an entire team of passionate and caring people, united in our mission to put the senior first. We have built a team of talented and experienced people who are passionate about transforming the lives of the seniors we serve. In this fast-growing company, you will find ample room for growth and innovation alongside the Alignment Health community. Working at Alignment Health provides an opportunity to do work that really matters, not only changing lives but saving them. Together.

    Alignment Healthcare is a data- and technology-driven healthcare company focused on partnering with health systems, health plans, and provider groups to provide care delivery that is preventive, convenient, coordinated, and that results in improved clinical outcomes for seniors. We are seeking a mission-driven Data Scientist to join our growing team and support risk adjustment strategy within our Medicare Advantage line of business. This role focuses on enhancing risk score accuracy, CMS audit preparedness (RADV), and building AI-powered tools that improve clinical documentation review and integrity. You'll play a key role in advancing AVA, our proprietary clinical intelligence platform, by developing next-generation models that support autonomous chart review and NLP/GenAI-driven documentation analytics. This is a unique opportunity to work at the intersection of healthcare, compliance, and machine learning - transforming how we ensure both quality and regulatory alignment.

    Job Duties/Responsibilities:

    - Collaborate with key business leaders to understand their business problems and come up with analytical solutions.
    - Apply coding skills and knowledge of data structures to develop projects in partnership with other scientists and engineers on the team.
    - Build customer segmentation models to better understand our customers and tailor clinical outcomes and the healthcare experience for them.
    - Build and fine-tune models for both LLMs and OCR-based document understanding, enabling accurate extraction from scanned or low-quality medical charts.
    - Develop scalable model pipelines that integrate NLP, computer vision, and unstructured data, leveraging cloud-based infrastructure (Azure) and containerized environments.
    - Collaborate with engineering teams to version, test, and deploy models using Git, CI/CD pipelines, and virtual machine (VM) environments.
    - Design algorithms to predict audit risk and detect documentation anomalies across cohorts and markets (an illustrative anomaly-scoring sketch follows this listing).
    - Partner with Coding, Compliance, CDI, Clinical, and Legal teams to ensure data outputs are aligned with CMS guidance.
    - Help standardize definitions, documentation logic, and reporting workflows to scale enterprise-wide AI-readiness.
    - Provide analytical support for CMS Star Ratings strategy, including: audit sampling methodology validation; chart review error pattern identification; root cause analysis on deletion rates and extrapolation exposure.

    Supervisory Responsibilities: N/A

    MINIMUM REQUIREMENTS: To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

    Minimum Experience: 2+ years of relevant experience in predictive modeling and analysis

    Education/Licensure: Required: PhD in Computer Science, Engineering, Mathematics, Statistics, or related field, or equivalent experience

    Other:

    - Excellent communication, analytical, and collaborative problem-solving skills.
    - Experience building end-to-end data science solutions and applying machine learning methods to real-world problems with measurable outcomes.
    - Deep understanding and experience with various machine learning algorithms, including deep neural networks, natural language processing, and LLMs.
    - Solid data structures and algorithms background.
    - Strong programming skills in one of the following: Python, Java, R, Scala, or C++.
    - Demonstrated proficiency in SQL and relational databases.
    - Experience with data visualization and presentation, turning complex analysis into insight.
    - Experience in setting experimental analytics frameworks or strategies for complex scenarios.
    - Understanding of relevant statistical measures such as confidence intervals, significance of error measurements, development and evaluation data sets, etc.
    - Experience manipulating and analyzing complex, high-volume, high-dimensionality, and unstructured data from varying sources.

    Preferred Qualifications:

    - Healthcare experience.
    - Experience with big data processing technologies: Databricks.
    - Experience in Azure, AWS, or other cloud ecosystems.
    - Experience with NoSQL databases.
    - Published work in academic conferences or industry circles.
    - Demonstrable track record of dealing well with ambiguity, prioritizing needs, and delivering results in an agile, dynamic startup environment.
    - Knowledge of CMS Risk Adjustment Data Validation (RADV) audits.

    Work Environment: The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

    ESSENTIAL PHYSICAL FUNCTIONS: The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. While performing the duties of this job, the employee is regularly required to talk or hear. The employee regularly is required to stand, walk, sit, use hands to finger, handle, or feel objects, tools, or controls, and reach with hands and arms. The employee frequently lifts and/or moves up to 10 pounds. Specific vision abilities required by this job include close vision and the ability to adjust focus.

    Alignment Healthcare, LLC is proud to practice Equal Employment Opportunity and Affirmative Action. We are looking for diversity in qualified candidates for employment: Minority/Female/Disabled/Protected Veteran. If you require any reasonable accommodation under the Americans with Disabilities Act (ADA) in completing the online application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please contact ******************.

    Pay Range: $149,882.00 - $224,823.00. Pay range may be based on a number of factors including market location, education, responsibilities, experience, etc.

    Alignment Health is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, age, protected veteran status, gender identity, or sexual orientation.

    *DISCLAIMER: Please beware of recruitment phishing scams affecting Alignment Health and other employers where individuals receive fraudulent employment-related offers in exchange for money or other sensitive personal information. Please be advised that Alignment Health and its subsidiaries will never ask you for a credit card, send you a check, or ask you for any type of payment as part of consideration for employment with our company. If you feel that you have been the victim of a scam such as this, please report the incident to the Federal Trade Commission at ******************************* If you would like to verify the legitimacy of an email sent by or on behalf of Alignment Health's talent acquisition team, please email ******************.
    $149.9k-224.8k yearly Auto-Apply 60d+ ago
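
    One simple way to approach the "predict audit risk and detect documentation anomalies" responsibility is an unsupervised outlier model. The sketch below uses scikit-learn's IsolationForest on invented per-provider features; it is illustrative only and not Alignment's AVA platform or its actual features.

    ```python
    # Minimal anomaly-detection sketch: score providers for unusual coding
    # patterns with an isolation forest, a common first pass before manual
    # audit review. Features and the 5% contamination rate are hypothetical.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # toy features per provider: average HCC codes per chart, deletion rate
    features = np.column_stack([
        rng.normal(4.0, 1.0, 200),    # codes per chart
        rng.normal(0.05, 0.02, 200),  # deletion rate
    ])

    model = IsolationForest(contamination=0.05, random_state=0).fit(features)
    flags = model.predict(features)          # -1 = flagged as anomalous
    print(f"{(flags == -1).sum()} providers flagged for documentation review")
    ```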
  • Data Scientist

    Opendoor Technologies Inc.

    Data engineer job in Ontario, CA

    Data Scientist, Pricing

    Opendoor is transforming one of the largest, most complex markets in the world - residential real estate - using data at massive scale. Every pricing signal we generate directly impacts how we value homes, how we manage risk, and how efficiently capital moves through our marketplace. The work is highly leveraged: the quality of our pricing decisions influences conversion, margins, customer trust, and the company's financial performance. We are looking for mid to senior level Data Scientists. In this role, you will be a core driver of how Opendoor prices real estate at scale. You'll operate at the intersection of economics, machine learning, experimentation, and product strategy - tackling ambiguity, shaping the pricing roadmap, and building models and analyses that materially move the business. Your insights will influence how we evaluate millions of dollars of housing inventory - and directly shape outcomes for our customers, our balance sheet, and the health of our marketplace.

    What You'll Do

    - Build and maintain pricing metrics, dashboards, and frameworks (an illustrative pricing-accuracy metric sketch follows this listing).
    - Run experiments and causal analyses to measure impact and drive decisions.
    - Develop predictive and statistical models that improve pricing accuracy.
    - Partner closely with Product, Engineering, and Operations teams to influence roadmap and model deployment.
    - Deliver insights and narratives that inform executive strategy.

    Skills & Qualifications

    - Deep statistical reasoning: hypothesis design, experimental design, causal inference, and the ability to distinguish signal from noise.
    - Proven end-to-end ML ownership: data acquisition, feature engineering, model development, validation, deployment, and ongoing monitoring.
    - Strong SQL and Python proficiency; comfortable working with production data pipelines and modern ML tooling (e.g., Spark, Airflow, Ray, SageMaker, Vertex, etc.).
    - Demonstrated ability to translate complex analytical findings into clear business recommendations and influence cross-functional decision-making.
    - Experience working with ill-defined problems and driving clarity on problem definition, success metrics, and realistic tradeoffs.
    - High data-quality bar: disciplined approach to validation, bias analysis, and making decisions rooted in evidence rather than intuition.
    - Effective communicator - able to tell the story behind the model to both highly technical and non-technical audiences.

    Base salary range for this role varies. Generally, the base salary range is $135,000 - $199,000 CAN annually + RSUs + bonus + ESPP + additional employee benefits (medical/dental/vision, life insurance, unlimited PTO, 401K). JR 9200 #LI-KC1

    About Us

    Powering life's progress, one move at a time. Since 2014, we've been reinventing life's most important transaction with a new, simple way to buy and sell a home. The traditional real estate process is broken, and our mission is clear: build a digital, end-to-end experience that makes buying and selling a home simple and certain. We're a team of problem solvers, innovators, and operators building the largest, most trusted platform for residential real estate. Whether it's starting a family, taking a new job, or making a life change, we help people move forward with confidence. This work isn't easy, and it's not for everyone. But if you want to be part of a team that's tilting the world in favor of people who want to sell, buy, or own a home, then you'll find purpose here.

    Opendoor Values: Openness. We believe that being open about who we are and what we do allows us to be better.

    Individuals seeking employment at Opendoor are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, sexual orientation, gender identity, or other protected status under all applicable laws, regulations, and ordinances. We collect, use, and disclose applicant personal information as described in our personnel privacy policies. To learn more, you can find the policy details for California residents here and for Canada residents here. We are committed to assisting members of the military community in utilizing their skills at Opendoor. U.S. candidates are able to review their military job classification at MyNextMove.org and apply for positions that align with their expertise. At Opendoor, we are committed to providing reasonable accommodations throughout our recruitment processes for candidates with disabilities, pregnancy, religious beliefs, or other reasons protected by applicable laws. If you require assistance or a reasonable accommodation, please contact us at ********************************.
    $135k-199k yearly Auto-Apply 8d ago
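
    A pricing-accuracy metric like the ones this role would track can be computed in a few lines; the valuations below are made up, and median absolute percentage error is one common choice of metric, not necessarily Opendoor's.

    ```python
    # Minimal pricing-accuracy sketch: compare model valuations against eventual
    # sale prices with median absolute percentage error and a within-5% rate.
    # All values are invented for illustration.
    import numpy as np

    predicted = np.array([510_000, 385_000, 742_000, 298_000])
    actual    = np.array([525_000, 379_000, 760_000, 305_000])

    ape = np.abs(predicted - actual) / actual
    print(f"median APE: {np.median(ape):.2%}, within 5%: {(ape <= 0.05).mean():.0%}")
    ```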
  • Data Scientist

    Origence

    Data engineer job in Irvine, CA

    With 30 years at the forefront of fintech innovation, we specialize in SaaS lending solutions that lead the industry. Our core mission is customer-centric, focusing on empowering credit unions across the United States with the tools to offer accessible, competitive lending services. We're deeply committed to enhancing the financial ecosystem for a broad network of credit unions, members, and auto dealers. We invest in our greatest assets, our employees, and foster a culture of innovation and ownership through freedom and responsibility. We celebrate fiscal accountability, operational rigor, and efficiency to create a sustainably healthy and robust business for the long term.

    About you: You are a self-driven, conscientious, fiscally responsible, self-aware, passionate and compassionate professional. You are comfortable with ambiguity, eternally curious, and love problem solving. You operate as an owner and work with a growth mindset. You are extremely productive on your own, and act as a multiplier collaborating with others. You are tireless in questioning the status quo and pursue the best answers to the hardest problems to the benefit of the business. Your focus is strong and capable of context switching and pivoting with the business. In the vacuum of leadership, you assume it.

    We are seeking an experienced AI/ML Data Scientist to join Origence's AI team and lead the development of cutting-edge artificial intelligence solutions for our lending technology platform. The successful candidate will design and deploy advanced ML models, implement AI-driven automation across our lending processes, and spearhead the integration of generative AI technologies to enhance our digital lending solutions for credit unions and community banks. This role offers the opportunity to work at the forefront of AI in fintech, building production-scale machine learning systems that process millions of lending decisions while exploring emerging AI technologies like large language models and computer vision for document processing.

    What You'll Be Doing:

    AI/ML Model Development & Deployment

    - Design, develop, and deploy advanced machine learning models for real-time credit decisioning and document processing.
    - Build and optimize neural networks for fraud detection, loan default prediction, and customer lifetime value modeling using TensorFlow, PyTorch, or similar frameworks.
    - Implement automated machine learning (AutoML) pipelines to streamline model development and hyperparameter optimization.
    - Create natural language processing solutions for loan application analysis, customer service automation, and regulatory compliance monitoring.

    AI Infrastructure & MLOps

    - Build and maintain scalable ML infrastructure using cloud platforms (primarily Azure, but can extend to AWS, GCP).
    - Implement MLOps best practices including model versioning, continuous integration/deployment, and automated monitoring (an illustrative drift-detection sketch follows this listing).
    - Develop A/B testing frameworks for ML models to measure performance and business impact in production.
    - Create automated model retraining pipelines and drift detection systems.

    Generative AI & Emerging Technologies

    - Research and implement generative AI applications for loan document processing, customer communication, and risk assessment reporting.
    - Develop retrieval-augmented generation (RAG) systems for intelligent customer support and regulatory compliance assistance.
    - Explore large language model fine-tuning for financial domain-specific applications.
    - Investigate federated learning approaches for privacy-preserving model training across multiple financial institutions.

    Collaboration & Communication

    - Create data visualizations, dashboards, and reports to communicate findings to product teams, risk management, and executive leadership.
    - Collaborate with cross-functional teams including product development, risk management, and engineering to define analytical requirements and KPIs for lending products.
    - Present complex financial analyses to diverse stakeholders, translating technical findings into business-actionable insights.

    Experimentation & Optimization

    - Design and conduct A/B tests to optimize loan application processes, approval rates, and customer experience across Origence's digital lending platform.
    - Implement statistical testing methodologies to measure the impact of product changes and business initiatives.
    - Continuously monitor and improve model performance and accuracy.

    The Ideal Candidate:

    Education: Master's degree or PhD in Machine Learning, Artificial Intelligence, Computer Science, Statistics, or related technical field

    Experience:

    - 4+ years of hands-on experience developing and deploying ML models in production environments, preferably in financial services or fintech.
    - Proven track record of building end-to-end ML systems from research to production deployment.
    - Experience with both supervised and unsupervised learning techniques, deep learning, and reinforcement learning.

    Specialized Skills:

    Technical Skills

    - Expert-level proficiency in Python and ML frameworks (TensorFlow, PyTorch, scikit-learn, XGBoost, LightGBM).
    - Strong experience with cloud ML platforms (primarily Azure, but can extend to AWS, GCP).
    - Proficiency in MLOps tools and practices (MLflow, CI/CD for ML).
    - Experience with big data technologies (Spark, Kafka, Airflow) and distributed computing.
    - Knowledge of SQL, NoSQL databases, and data engineering principles.
    - Familiarity with model interpretation techniques (SHAP, LIME) and explainable AI methods.

    AI/ML Specializations

    - Experience with transformer architectures, attention mechanisms, and large language models.
    - Knowledge of computer vision techniques for document processing and image analysis.
    - Understanding of natural language processing, text mining, and sentiment analysis.
    - Familiarity with generative AI models (GANs, VAEs, diffusion models).
    - Experience with time series forecasting and sequential modeling techniques.

    Domain Knowledge

    - Understanding of loan origination principles and lending industry metrics.
    - Knowledge of common financial ratios, loan performance indicators, and risk assessment methodologies.
    - Awareness of the regulatory environment affecting consumer and commercial lending.

    Preferred Qualifications:

    Advanced AI/ML Experience

    - PhD in Machine Learning, AI, or related field with publications in top-tier conferences (NeurIPS, ICML, ICLR, etc.).
    - 6+ years of experience building production ML systems at scale in fintech, banking, or related regulated industries.
    - Experience leading AI research initiatives or managing ML engineering teams.
    - Track record of patents or open-source contributions in AI/ML.

    Cutting-Edge AI Technologies

    - Hands-on experience with foundation models, prompt engineering, and LLM fine-tuning (GPT, BERT, T5, etc.).
    - Knowledge of multi-modal AI systems combining text, vision, and structured data.
    - Experience with federated learning, differential privacy, and privacy-preserving ML techniques.
    - Familiarity with quantum machine learning or neuromorphic computing approaches.
    - Understanding of AI ethics, bias detection, and fairness in ML systems.

    Technical Leadership

    - Experience with MLOps at enterprise scale, including model governance and risk management.
    - Knowledge of edge AI deployment and model optimization for mobile/embedded systems.
    - Background in distributed training, model parallelism, and large-scale inference systems.
    - Familiarity with AI accelerators (GPUs, TPUs, specialized AI chips) and performance optimization.

    Why you should apply:

    - Flexible working environment
    - Paid time off
    - 401k (8% match)
    - College tuition benefits / tuition reimbursement
    - Good benefits options
    - Company culture! Cultural and holiday celebrations, theme days like Star Wars Day and Bring Your Kids to Work Day, monthly town halls and quarterly company meetings that ensure awareness, inclusion, and transparency.

    The starting salary range for this full-time position in Irvine, CA is $147,900 - $184,900 per year. This base pay will take into consideration internal equity, the candidate's geographic region, job-related knowledge, and experience, among other factors. Origence maintains a highly competitive compensation program. Under company guidelines, this position is eligible for an annual bonus to provide an incentive to achieve targeted goals. Bonuses are awarded at the company's discretion on an individual basis.

    Origence is an equal opportunity employer. All recruitment, hiring, training, compensation, benefits, discipline, and other terms and conditions of employment will be based upon an individual's qualifications regardless of race, religion, color, sex, gender identity, sexual orientation, national origin, ancestry, military service, marital status, pregnancy, age, protected medical condition, genetic information, disability, or any other category protected by federal, state, or local law.
    $147.9k-184.9k yearly 60d+ ago
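
    Drift detection, one of the MLOps responsibilities listed above, is often approximated with a population stability index (PSI) between training-time and production score distributions. The sketch below is a generic version with synthetic scores and the conventional 0.2 "investigate" threshold; it is not Origence's actual pipeline.

    ```python
    # Minimal model-monitoring sketch: population stability index (PSI) between
    # a training-time score distribution and recent production scores, a common
    # drift signal that can trigger retraining. Data and threshold are illustrative.
    import numpy as np

    def psi(expected, actual, bins=10):
        cuts = np.percentile(expected, np.linspace(0, 100, bins + 1))
        cuts[0], cuts[-1] = -np.inf, np.inf            # catch out-of-range scores
        e = np.histogram(expected, cuts)[0] / len(expected)
        a = np.histogram(actual, cuts)[0] / len(actual)
        e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
        return np.sum((a - e) * np.log(a / e))

    rng = np.random.default_rng(1)
    train_scores = rng.beta(2, 5, 10_000)
    prod_scores  = rng.beta(2, 4, 10_000)              # slightly shifted population

    value = psi(train_scores, prod_scores)
    print(f"PSI = {value:.3f} ({'investigate' if value > 0.2 else 'stable'})")
    ```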
  • Data Scientist - Physics

    TAE Technologies 4.0 company rating

    Data engineer job in Irvine, CA

    Do Epic Science

    TAE is the world's first private fusion energy company, founded in 1998 to commercialize the cleanest, safest, most affordable, and sustainable form of carbon-free power. We are applying science and engineering to design transformational technologies. Whether it's harnessing fusion through the science of stars, making exponential leaps in power efficiency, or innovating medical care with a novel cancer treatment: we're turning the promise of science into reality. We're looking for candidates who are passionate about realizing our mission: a future where all people have affordable access to reliable, abundant, and environmentally friendly fusion-generated electricity.

    About The Role

    We are seeking skilled data scientists to join our Data Science team at TAE Technologies. The experimental groups generate large and complex data sets, and only through advanced post-processing and modeling can these data reveal the secrets to the behavior of magnetically confined fusion plasma. By leveraging these insights, we aim to tackle the challenges of fusion energy development and deepen our understanding of plasma behavior.

    About You

    - Pipeline Development and Maintenance: Help plan, develop, deploy, and maintain the entire Data Science pipeline at TAE, from data acquisition and storage to processing, visualization, and deployment.
    - Data Processing and Knowledge Distillation: Collaborate with scientific staff and domain experts to iterate on prototype algorithms, converting them into robust scientific data processing routines (an illustrative signal-processing sketch follows this listing).
    - Software Product Life Cycle Management: Be responsible for all stages of the software product life cycle, including planning, analysis, coding, debugging, testing, deployment, and maintenance.
    - Experimental Operations Support: Serve as a vital part of daily experimental operations by real-time monitoring and processing of data, ensuring experiments proceed smoothly and produce high-quality results.
    - Task Management and Collaboration: Effectively manage multiple tasks independently and in a collaborative environment, ensuring timely and successful completion.

    Required Skills

    - PhD in physics or a related field
    - 3+ years developing in Python
    - Experience using common Python data analysis and visualization packages such as NumPy, SciPy, Pandas, and Matplotlib
    - Able to work in a diverse R&D environment and communicate effectively and professionally with co-workers at all levels
    - Passion for data integrity, as this is central to what we do as a team

    Nice to have:

    - Ph.D. in plasma physics
    - Familiarity with one or more machine learning libraries like Scikit-Learn, TensorFlow, and/or Keras
    - Exposure to applying Bayesian statistical inference tools to experimental data
    - Familiarity and comfort using the command line

    Education: Ph.D. in Physics or related field

    At TAE Technologies, we consider a wide range of factors when making compensation decisions, including but not limited to skill sets, experience and job-related knowledge, training, licenses and certifications, and other business and organizational needs. The total compensation package for this position may also include other elements depending on the position offered (non-Sr., Sr., Lead or Manager). The compensation range for these roles is $130,000 - $150,000.

    About Us

    Imagination, skill, and will. We are a diverse team of over 500 engineers, scientists, professionals, Maxwell Prize winners, and big thinkers from more than 40 countries with a track record of delivering on the innovative ways science can lead humanity into a brighter era. We are not afraid to envision a future where fusion and science can transform our world.

    What you'll get with us

    - Generous benefits such as medical, dental, vision, 401(k) with company match, paid vacation + sick time, companywide December holiday, wellness program, parental leave
    - Payment rewards for referring talent, novel research, and patents
    - A collaborative environment: an organization where talents and interests can plug in to different groups throughout the organization
    - Potential for equity participation
    - HQ in Southern California
    - Employee events on and off-site
    - A commitment to upholding and growing an inclusive organization

    Learn more: tae.com, our podcast Good Clean Energy, Instagram, LinkedIn

    TAE Technologies is an Equal Opportunity Employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran, or disability status. We ensure all individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, perform essential job functions, and receive other benefits and privileges of employment. Please contact us at ****************** to request accommodations or request more information.

    Note to Agencies: TAE prefers to hire directly and maintains an existing preferred supplier list. We do not accept speculative CVs or referrals from agencies. If speculative CVs are sent, no fee will be applicable.
    $130k-150k yearly 56d ago
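
    Routine post-processing of an experimental trace (the "prototype algorithm to robust routine" work described above) might start with something as simple as smoothing and peak-finding; the synthetic pulse below stands in for real diagnostic data.

    ```python
    # Minimal diagnostic-processing sketch: smooth a noisy time-series signal
    # and locate its peak, the kind of routine post-processing applied to
    # experimental traces. The synthetic signal is a stand-in for real data.
    import numpy as np
    from scipy.signal import savgol_filter

    t = np.linspace(0, 10e-3, 2000)                        # 10 ms trace
    pulse = np.exp(-((t - 4e-3) / 1.5e-3) ** 2)            # underlying pulse shape
    noisy = pulse + np.random.default_rng(2).normal(0, 0.05, t.size)

    smoothed = savgol_filter(noisy, window_length=51, polyorder=3)
    peak_time = t[np.argmax(smoothed)]
    print(f"peak at {peak_time * 1e3:.2f} ms")
    ```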
  • Data Scientist

    VSolvit

    Data engineer job in Norco, CA

    We are seeking a talented and driven Data Scientist with Secret Clearance to join our dynamic team. In this role, you will apply advanced data analytics and machine learning techniques to solve complex problems in a variety of domains. You will work with large datasets, develop predictive models, and support decision-making processes while adhering to the highest standards of security and confidentiality. As with any position, additional expectations exist. Some of these are, but are not limited to, adhering to normal working hours, meeting deadlines, following company policies as outlined by the Employee Handbook, communicating regularly with assigned supervisor(s), staying focused on the assigned tasks including company meetings, and completing other tasks as assigned.

    Responsibilities

    - Analyze large and complex datasets to extract actionable insights that will inform business strategies and decisions
    - Develop, implement, and maintain predictive models and machine learning algorithms
    - Collaborate with cross-functional teams to understand business needs and translate them into technical solutions
    - Use statistical methods, data mining, and machine learning to uncover trends and patterns within the data
    - Prepare reports and presentations that effectively communicate your findings to both technical and non-technical stakeholders
    - Ensure compliance with all security protocols and confidentiality requirements, adhering to industry standards and internal policies
    - Support senior leadership in strategic decision-making through advanced data-driven insights
    - Stay up-to-date with the latest developments in data science, machine learning, and related fields
    - Provide technical guidance and mentorship to junior data scientists and analysts

    Basic Qualifications

    - US citizenship required
    - Active Secret Security Clearance
    - If applicable: if you are or have been recently employed by the U.S. government, a post-employment ethics letter will be required if employment with VSolvit is offered
    - Bachelor's degree in Computer Science, Data Science, Mathematics, Statistics, Engineering, or related field (Master's preferred)
    - At least 3+ years of experience as a Data Scientist or in a related data analysis role
    - Strong proficiency in Python, R, or similar programming languages for data analysis
    - Expertise in machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn)
    - Strong knowledge of SQL and experience with relational databases
    - Familiarity with data visualization tools like Tableau, Power BI, or similar
    - Solid understanding of statistical modeling, hypothesis testing, and data-driven decision-making
    - Excellent communication skills, with the ability to present technical findings to non-technical stakeholders

    Preferred Qualifications

    - Experience working with classified government data or in a defense-related environment
    - Familiarity with cloud platforms (AWS, Azure, etc.) and data processing tools
    - Knowledge of big data technologies (e.g., Hadoop, Spark)
    - Experience with DevOps practices and version control (Git, GitHub)
    - Strong problem-solving skills and ability to work in a fast-paced environment
    - Machine learning experience is a plus

    Company Summary

    Join the VSolvit Team! Founded in 2006, VSolvit (pronounced 'We Solve It') is a technology services provider that specializes in cybersecurity, cloud computing, geographic information systems (GIS), business intelligence (BI) systems, data warehousing, engineering services, and custom database and application development. VSolvit is an award-winning WOSB, CA CDB, MBE, WBE, and CMMI Level 3 certified company. We offer a customizable health benefits program that best meets the needs of its employees. Offerings may include: medical, dental, and vision insurance, life insurance, long- and short-term disability and other insurance products, Health Savings Account, Flexible Spending Account, 401K Retirement Plan options, Tuition Reimbursement, and assorted voluntary benefits. Our goal is to grow together and enjoy the work that we do as a team.

    VSolvit LLC is an Equal Opportunity/Affirmative Action employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, national origin, protected veteran status, or disability status.
    $96k-139k yearly est. Auto-Apply 13d ago
  • Data Scientist

    CareConnectMD Inc. 3.6 company rating

    Data engineer job in Costa Mesa, CA

    Job Description: We are seeking a skilled SQL developer with Python scripting experience in the healthcare industry. The ideal candidate should have several years of experience in data analysis and data modeling using SQL, Python, and data visualization tools such as Power BI, SSRS, or Qlik. Experience with CMS (Centers for Medicare and Medicaid Services) and Value-Based Care programs is highly preferred, with an emphasis on knowledge of claims data (CCLF) and electronic medical records (EMRs).

    Key Responsibilities:

    - Estimate impact with rigorous retrospective analyses (LOS, readmissions, mortality, reimbursement); an illustrative readmission-rate sketch follows this listing.
    - Contribute to building end-to-end data modeling of clinical and claims data.
    - Productionize efficient, reusable, and reliable SQL code to meet business requirements, and help define technical requirements as needed.
    - Develop, test, and maintain reporting in Power BI driven from electronic medical records data, claims data, HIE data, Salesforce data, and other external data sources.
    - Collaborate with cross-functional teams to understand, define, design, and deliver new reports, dashboards, features, and enhancements.
    - Create comprehensive reports and data visualizations using Power BI to support decision-making processes.
    - Ensure the performance, quality, and responsiveness of reporting tools.
    - Maintain up-to-date knowledge of emerging technologies and industry trends.
    - Participate in code reviews and contribute to a culture of continuous improvement.
    - Ensure compliance with healthcare regulations and standards, including HIPAA.

    Required Qualifications:

    - Bachelor's degree in computer science, Information Technology, or a related field, or equivalent practical experience.
    - Four plus years of experience working with the Medicare Shared Savings Program.
    - Strong understanding of claims and EMR data structures.
    - Proficiency in back-end technologies such as Node.js, Python, Ruby, Java, or .NET.
    - Experience with database management systems such as Azure, SQL, and MySQL.
    - Expertise in report writing and data visualization using Power BI in the healthcare industry.
    - Strong understanding of healthcare regulations and standards.
    - Excellent problem-solving skills and attention to detail.
    - Strong communication and collaboration skills.
    - Ability to work in the Costa Mesa office 1-2 days per week (at the discretion of the hiring manager).

    Preferred Qualifications:

    - Strong understanding of CMS and Value-Based Care programs such as MSSP.
    - Good understanding of Azure, MySQL, MSSQL, and Data Factory.
    - Knowledge of healthcare interoperability standards (e.g., HL7, FHIR).
    - Experience with DevOps practices and tools (e.g., Docker, Kubernetes, Jenkins, GitHub).
    - Familiarity with Agile development methodologies.
    - Familiarity with LLM and AI technology solutions in healthcare.
    $107k-151k yearly est. 25d ago
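
    The retrospective readmission analysis mentioned in the responsibilities can be sketched with pandas; the admission records and column names below are hypothetical, not CCLF claim fields.

    ```python
    # Minimal retrospective-analysis sketch: flag 30-day readmissions from a toy
    # claims extract and compute the rate. Records and field names are invented.
    import pandas as pd

    admits = pd.DataFrame({
        "member_id":      ["A", "A", "B", "C"],
        "admit_date":     pd.to_datetime(["2024-01-02", "2024-01-20", "2024-02-10", "2024-03-05"]),
        "discharge_date": pd.to_datetime(["2024-01-06", "2024-01-25", "2024-02-15", "2024-03-09"]),
    }).sort_values(["member_id", "admit_date"])

    # Days between each admission and the member's previous discharge.
    prev_discharge = admits.groupby("member_id")["discharge_date"].shift(1)
    admits["readmit_30d"] = (admits["admit_date"] - prev_discharge).dt.days.le(30)

    rate = admits["readmit_30d"].sum() / len(admits)
    print(admits[["member_id", "admit_date", "readmit_30d"]])
    print(f"30-day readmission rate: {rate:.1%}")
    ```

    In practice the denominator and exclusions (transfers, planned readmissions) follow the program's measure specification; the sketch only shows the mechanics.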
  • Sr. Hadoop/BigData Consultant

    Jobsbridge

    Data engineer job in Santa Ana, CA

    Hello, greetings from Jobsbridge!

    Jobsbridge, Inc. is a fast-growing, Silicon Valley-based IT staffing and professional services company specializing in Web, Cloud & Mobility staffing solutions. Be it core Java, full-stack Java, Web/UI designers, Big Data, Cloud, or Mobility developers/architects, we have them all.

    Job Description

    - Minimum 10 years in the IT profession
    - Minimum 3 years of hands-on experience with data management using Big Data technologies
    - Minimum 5 years of experience creating DM/DW architectures
    - Experience with various data intake, staging, and integration methods and tools (an illustrative Spark intake sketch follows this listing)
    - Experience delivering against long-term, strategic technology roadmaps
    - Success leading multiple projects using the Agile methodology
    - High accountability with a demonstrated ability to deliver
    - Demonstrated ability to communicate ideas clearly and concisely (both verbally and in writing) to executives and technologists
    - Relevant Bachelor's degree or equivalent experience
    - Need to be strong in Hadoop
    - Need to be strong technically
    - Strong communication
    - GC/CI
    - Strong with Spark

    Qualifications: Hadoop, Big Data, Spark.

    Additional Information: Only GC/Citizen, OPT, EAD, H4
    $90k-124k yearly est. 60d+ ago
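
    A minimal PySpark sketch of the data intake and staging work referenced above: read staged files, derive a daily aggregate, and write a curated, partitioned output. Paths and column names are hypothetical.

    ```python
    # Minimal PySpark sketch: read raw event files from a data-lake staging area
    # and build a daily aggregate. Paths, columns, and the schema are invented
    # for illustration of the intake/integration pattern, not a client design.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily-intake").getOrCreate()

    events = spark.read.parquet("/data/staging/events/")   # hypothetical path

    daily = (events
             .withColumn("event_date", F.to_date("event_ts"))
             .groupBy("event_date", "event_type")
             .agg(F.count("*").alias("events"),
                  F.countDistinct("user_id").alias("users")))

    (daily.write
          .mode("overwrite")
          .partitionBy("event_date")
          .parquet("/data/curated/daily_events/"))
    ```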
  • Data Engineer

    Kajabi 4.0 company rating

    Data engineer job in Newport Beach, CA

    Data Engineer - Platform

    About Us

    Kajabi is in the middle of a once-in-a-decade transformation. After fifteen years, our founders returned to rebuild Kajabi with the same speed, hunger, and grit that sparked the original movement. We're not a corporate SaaS company trying to play it safe - we're a team of builders rewriting the future of the expert economy. Millions of people around the world rely on Kajabi to share what they know and change lives because of it. Our Heroes aren't “customers”… they're everyday entrepreneurs using Kajabi to build freedom for themselves and impact for others. If you want to be part of a company moving fast, raising the bar, and building something that actually matters - welcome in.

    Your Impact

    This role will help shape Kajabi's next generation data platform capabilities. Your work will enable:

    - A trustworthy and scalable internal analytics platform that empowers teams across Kajabi.
    - Production-grade analytics data for our customers, enriching the core Kajabi products.

    This is a high-impact role for someone who thrives in system design, distributed data engineering, and platform reliability.

    What You'll Do

    - Design, build, and maintain scalable components of Kajabi's data platform to support ingestion, transformation, observability, and analytics.
    - Develop and operate containerized services using Docker, and contribute to deployment workflows leveraging Kubernetes (EKS) and Helm.
    - Provision and manage infrastructure using Terraform within AWS and other providers.
    - Build and scale ingestion pipelines using tools such as Airbyte, Rudderstack, and Kafka (an illustrative event-ingestion sketch follows this listing).
    - Develop data pipelines and modeling workflows in Snowflake with DBT, ensuring performance, reliability, and best practices.
    - Design and scale real-time analytics datasets in ClickHouse to support high-concurrency production workloads.
    - Build Python services and tooling to automate, orchestrate, and integrate data processing.
    - Contribute to CI/CD workflows using GitHub Actions.
    - Use Datadog for monitoring, observability, and troubleshooting distributed systems.
    - Partner closely with production engineering to ensure data platform infrastructure aligns with Kajabi's broader engineering standards.
    - Apply strong data governance and data quality practices across pipelines and platform components.
    - Participate in planning and execution workflows; familiarity with Shape Up and/or Agile methodologies is a plus.

    What You Bring

    - Strong software engineering foundation with experience owning and operating infrastructure in a production environment.
    - Proficient with SQL and Python.
    - Deep experience with Docker and containerization.
    - Experience deploying services using Kubernetes (EKS) and Helm.
    - Experience with Terraform for cloud resource provisioning.
    - Extensive AWS experience, including S3, DynamoDB, Lambda, and Glue.
    - Experience with Snowflake.
    - Experience building pipelines and transformations with DBT.
    - Experience with Airbyte, Rudderstack, Kafka, and ClickHouse.
    - Experience with Datadog for observability and troubleshooting.
    - Experience with CI/CD using GitHub Actions.
    - Strong troubleshooting abilities for complex distributed systems.
    - Excellent written communication skills with clear, structured documentation abilities.
    - High level of organization and ability to collaborate effectively across engineering teams.
    - Understanding of data governance and data quality best practices.

    Kajabi Team Benefits Package

    - Competitive full-time salary + bonus + equity eligibility
    - Full medical, dental, and vision (company-paid for you + family)
    - 401(k) with 6% match
    - Flexible PTO
    - Fitness + wellness perks
    - Mental health resources
    - In-office lunches, collaboration days, and leadership growth opportunities

    In-Office Requirement Statement

    We let the type of work you do guide the collaboration style. That means we're not always working in an office, but we continue to gather for key moments of collaboration and connection. This role will need to be in the office for in-person collaboration 2-3 times a quarter and therefore is best situated in the Western time zones. This role is remote, but we're also happy to support relocation for exceptional candidates who wish to work from our Newport Beach, CA office. If this is of interest, please let us know when we connect!

    Pay Range

    At Kajabi we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for a bonus. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise. US-based applicants only. $108,750 - $145,000 + bonus

    How to Apply

    If you're ready to design and operate the next generation of Kajabi's data platform - and you want to build systems that directly power our product and internal analytics - submit your application below. We're looking for engineers who set a high technical bar, think rigorously, and can deliver reliable, scalable infrastructure in a fast-moving environment. Apply now.

    Kajabi LLC is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, gender expression, national origin, disability, age, veteran status, or any other basis protected by applicable law.
    $111k-157k yearly est. Auto-Apply 8d ago
  • Lead Data Engineer- Investment Data

    Pacific Lifecorp

    Data engineer job in Newport Beach, CA

    Providing for loved ones, planning rewarding retirements, saving enough for whatever lies ahead - our policyholders count on us to be there when it matters most. It's a big ask, but it's one that we have the power to deliver when we work together. We collaborate and innovate - pushing one another to transform not just Pacific Life, but the entire industry for the better. Why? Because it's the right thing to do. Pacific Life is more than a job, it's a career with purpose. It's a career where you have the support, balance, and resources to make a positive impact on the future - including your own. Unlock the Power of Data at Pacific Life. We're seeking a talented Lead Data Engineer to join our Pacific Life Investments Data Team onsite in Newport Beach, CA. We are looking for self-starters to help shape the future of data engineering and drive data-driven success. As a Lead Data Engineer, you'll move Pacific Life, and your career, forward by accelerating our data initiatives and bringing modern technical solutions to the table. You will fill a new role that sits on the Investments data team in the technology organization. Your colleagues will include scrum masters, data analysts, and fellow Data, AI, Governance, and QA professionals. Join our highly collaborative, innovative team. How you'll help move us forward: Partner with data architects, analysts, engineers, and stakeholders to understand data requirements and deliver solutions. Help build scalable products with robust security, quality, and governance protocols. Create low-level design artifacts, including mapping specifications. Build scalable and reliable data pipelines to support data ingestion (batch and/or streaming) and transformation from multiple data sources using SQL, AWS, Snowflake, and data integration technologies. Create unit/integration tests and implement automated build and deployment (see the brief sketch after this posting). Participate in code reviews to ensure standards and best practices. Deploy, monitor, and maintain production systems. Use the Agile Framework to organize, manage, and execute work. Demonstrate adaptability, initiative, and attention to detail through deliverables and ways of working. The experience you bring: Bachelor's degree in Computer Science, Data Science, or Statistics 7+ years of experience in analysis, design, development, and delivery of data solutions 7+ years of experience and proficiency in SQL, ETL, ELT, leading cloud data warehouse technologies, data transformation, and data management tools Understanding of data engineering best practices and data integration patterns 2+ years of experience with DevOps and CI/CD 1+ years of experience (not just POC) using Git and Python Agile Scrum work experience Effective communication & facilitation, both verbal and written Team-Oriented: Collaborating effectively with team and stakeholders Analytical Skills: Strong problem-solving skills with the ability to break down complex data solutions What makes you stand out: Investments or FINTECH domain knowledge preferred Strong data analysis skills and/or data mining experience Experience with one or more integration tools (Matillion, Informatica, SQL SSIS, dbt) Experience with Snowflake and dbt Works independently with minimal guidance. You can be who you are. We are committed to a culture of diversity and inclusion that embraces the authenticity of all employees, partners and communities. We support all employees to thrive and achieve their fullest potential. What's life like at Pacific Life?
Visit Instagram.com/lifeatpacificlife #LI-DW1 Benefits start Day 1. Your wellbeing is important. We're committed to providing flexible benefits that you can tailor to meet your needs. Whether you are focusing on your physical, financial, emotional, or social wellbeing, we've got you covered. • Prioritization of your health and well-being including Medical, Dental, Vision, and a Wellbeing Reimbursement Account that can be used on yourself or your eligible dependents • Generous paid time off options including Paid Time Off, Holiday Schedules, and Financial Planning Time Off • Paid Parental Leave as well as an Adoption Assistance Program • Competitive 401k savings plan with company match and an additional contribution regardless of participation. Base Pay Range: The base pay range noted represents the company's good faith minimum and maximum range for this role at the time of posting. The actual compensation offered to a candidate will be dependent upon several factors, including but not limited to experience, qualifications and geographic location. Also, most employees are eligible for additional incentive pay. $134,280.00 - $164,120.00 EEO Statement: Pacific Life Insurance Company is an Equal Opportunity/Affirmative Action Employer, M/F/D/V. If you are a qualified individual with a disability or a disabled veteran, you have the right to request an accommodation if you are unable or limited in your ability to use or access our career center as a result of your disability. To request an accommodation, contact a Human Resources Representative at Pacific Life Insurance Company.
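To ground the unit/integration testing expectation referenced above, here is a loose, hypothetical sketch of how a small transformation might be covered by a pytest-style test; the column names and rounding rule are illustrative only, not Pacific Life logic.

```python
# Illustrative only: a tiny transformation plus a pytest-style unit test.
# Column names and the rounding rule are hypothetical, not Pacific Life logic.
import pandas as pd
import pytest


def add_market_value(positions: pd.DataFrame) -> pd.DataFrame:
    """Add a market_value column (quantity * price), rounded to 2 decimals."""
    out = positions.copy()
    out["market_value"] = (out["quantity"] * out["price"]).round(2)
    return out


def test_add_market_value():
    raw = pd.DataFrame({"quantity": [10, 3], "price": [101.257, 50.0]})
    result = add_market_value(raw)
    # Values are computed and rounded as expected.
    assert result["market_value"].tolist() == pytest.approx([1012.57, 150.0])
    # The input frame is not mutated by the transformation.
    assert "market_value" not in raw.columns
```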
    $134.3k-164.1k yearly Auto-Apply 28d ago
  • Data Engineer

    Evolus, Inc. 4.2 company rating

    Data engineer job in Newport Beach, CA

    Evolus (NASDAQ: EOLS) is a performance beauty company with a customer-centric approach focused on delivering breakthrough products. We are seeking an experienced and driven Data Engineer to join our Information Technology team, reporting to the Executive Director, Data Engineering. We are looking for an enthusiastic person with strong data engineering and analytical skills to join our team and support data and analytics initiatives across Evolus's global business functions. The Data Engineer's role is to integrate data from a variety of internal and external sources into a common warehouse data model. This is a technical role that involves building and maintaining ELT data pipelines, recommending and implementing appropriate data models, and being comfortable in a DataOps environment. We are looking for someone with a consultative mindset who can interact with the business and analytics teams and help drive value to the business. The data ecosystem is an evolving space, and we expect and encourage innovation and thought leadership. If you join our team, you will be working on some of the most exciting opportunities and challenges we face, with a team that values growth, recognition, and camaraderie. If you are looking for an opportunity to exhibit your knowledge and technical abilities in a unique environment, then look no further! In this role, you will be challenged to drive the success of Evolus in an effort to build a brand like no other. Essential duties and responsibilities where you'll make the biggest impact… * Collaborate with team members to collect business requirements, define successful analytics outcomes, and design data models * Design and develop the Snowflake data warehouse using dbt or another ELT tool to extend the Enterprise Dimensional Model * Contribute to planning and prioritization discussions * Break down and architect the most complex data engineering problems to deliver insights that meet and ideally exceed business needs * Own and deliver solutions - from ingestion of sources to data products for end user consumption, from conceptual iteration to production support * Deliver and ensure sustained performance of all data engineering pipelines and remediate where required * Own source code management, documentation (technical and end user), and release planning for data engineering products; lean in to DataOps, DevOps, and CI/CD to deliver reliable, tested, and scalable functionality through automation * Identify and proactively manage risks to the data engineering platform * Office location - Newport Beach.
Hybrid schedule: Monday and Friday remote; Tuesday - Thursday onsite * Other duties as assigned Qualifications and Skills You'll Bring to the Team… * Bachelor's degree required * 6+ years of experience in enterprise data solutions * 4+ years in cloud-based data warehousing with strong SQL and Snowflake experience * Experience building data pipelines using Python and data orchestration tools like Apache Airflow (see the brief sketch after this posting) * Data extraction/transformation/orchestration tools such as Fivetran, dbt, Datafold, Prefect, Kafka, Stitch, and Matillion * Deep understanding of data analysis, data modeling for visualization, and reporting * Experience in DataOps and Git or Azure DevOps and CI/CD pipelines * Demonstrated experience with one or more of the following business subject areas: healthcare, marketing, finance, sales, product, customer success, or engineering * Experience performing root cause analysis for production issues and identifying opportunities for improvement * Passionate about writing clean, documented, and well-formed code and performing code reviews * Keen attention to detail in planning, organization, and execution of tasks, while still seeing the big picture and understanding how all the pieces fit together and affect one another * Excellent communication and interpersonal skills Compensation & Total Rewards This is an Exempt position. The expected pay range for this position is $114,000 to $142,000. You are eligible for an annual bonus compensation plan; terms and conditions apply. Your actual base salary will be determined on a case-by-case basis and may vary based on a number of considerations including but not limited to role-relevant knowledge and skills, experience, education, geographic location, certifications, and more. We offer more than just a paycheck, and your base salary is just the start! Stay happy and healthy with our competitive suite of medical, dental and vision benefits to help you feel your best and be your best. We also provide those benefits you shouldn't have to worry about, from employer-covered life insurance to short-term disability. Take advantage of the 401k match offered by Evolus and let us invest in your future. You may also be eligible for new hire equity and long-term incentives in the form of RSUs, stock options, and/or discretionary bonuses. We offer mental health and wellbeing resources for you to develop skills to find your calm, boost your confidence, and show up as your best self in work and life. Travel or relax and come back feeling refreshed with our flexible paid time off program for exempt employees and a paid time off accrual plan for non-exempt employees. Did we mention the holiday soft closure between the Christmas and New Year's holidays? We have that, too. Additional perks include regularly catered team meals at our Evolus Headquarters, a fully stocked kitchen (Kombucha & Coffee included), and the opportunity to join an organization where our values of Grit, Impact, Fun, and Transparency are displayed daily. Evolus takes pride in being a company on the forefront of innovation, while being committed to conducting its business with the highest degrees of integrity, professionalism, and social responsibility. We are also committed to complying with all laws and regulations that apply to our business. Employee welfare is no different. Here at Evolus, we don't just work together, we've built a culture of inclusion!
Because of this, you'll find yourself immersed in an environment that not only promotes respect, collaboration and team building, but a community too. And that's just the tip of the iceberg. Join our team and see for yourself! EOE M/F/D/V. For more information, please visit our website at ************** or reach out to ******************.
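As a rough illustration of the Python-plus-Airflow-plus-dbt orchestration pattern referenced in this posting, here is a minimal DAG sketch. The DAG id, schedule, and dbt project path are hypothetical placeholders, and the extract step is stubbed out rather than reflecting any Evolus system.

```python
# Illustrative Airflow DAG: extract raw data, then run dbt models.
# DAG id, schedule, and paths are hypothetical placeholders (Airflow 2.4+ syntax).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_raw_sources(**_context):
    # Placeholder for pulling source data (an API, database, etc.) into staging.
    print("extracting raw sources into the staging area")


with DAG(
    dag_id="elt_daily",                # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",              # daily at 06:00
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract_raw_sources",
        python_callable=extract_raw_sources,
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/project && dbt run --target prod",
    )
    # Run warehouse transformations only after extraction succeeds.
    extract >> transform
```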
    $114k-142k yearly Auto-Apply 37d ago
  • Cloud Data & AI Engineer

    OSI Digital 4.6 company rating

    Data engineer job in Irvine, CA

    Job Description Job Title: Cloud Data & AI Engineer Employment: Full-time At OSI Digital Inc, we accelerate our clients' digital transformation journeys by delivering modern data solutions, enabling them to unlock the full potential of their data with scalable cloud platforms, intelligent analytics, and AI-driven solutions. With deep expertise across data engineering, cloud platforms, advanced analytics, and AI/ML, our teams bring both technical mastery and business acumen to every engagement. We don't just implement tools - we build scalable, future-ready solutions that drive measurable outcomes for our clients. Role Summary: We are seeking a highly skilled and results-driven Modern Cloud Data and AI Engineer with a strong background in modern cloud data architecture, specifically on Snowflake, and hands-on experience developing data solutions in Power BI and implementing AI solutions. The ideal candidate combines strong data engineering, integration, and BI expertise with hands-on AI project execution, supporting OSI's reputation for high-impact consulting in the cloud and digital transformation space. They will be a strong communicator, capable of implementing projects from the ground up. Key Responsibilities: Lead the design, development, and implementation of highly scalable and secure data warehouse solutions on Snowflake, including schema design, data loading, performance tuning, and optimizing cloud costs. Design and build robust, efficient data pipelines (ETL/ELT) using advanced data engineering techniques. This includes hands-on experience in data integration via direct APIs (REST/SOAP) and working with various integration tools (e.g., Talend, Stitch, Fivetran, or native cloud services). Develop and implement high-impact visual analytics and semantic models in Power BI. Apply advanced features such as DAX, Row-Level Security (RLS), and dashboard deployment pipelines. Proficiency in Python/R, familiarity with ML frameworks (scikit-learn, TensorFlow, PyTorch), experience with MLOps concepts, and deploying models into a production environment on cloud platforms. Develop and deploy AI/ML solutions using Python, Snowpark, or cloud-native ML services (AWS SageMaker, Azure ML). Exposure to LLM/GenAI projects (chatbot implementations, NLP, recommendation systems, anomaly detection) is highly desirable. Implement and manage data solutions utilizing core services on at least one major cloud platform (AWS or Azure). Demonstrate exceptional communication and articulation skills to engage with clients, gather requirements, and lead project delivery from the ground up (inception to final deployment). Required Qualifications: Minimum of 4 years of professional experience in data engineering, consulting, and solution delivery. Bachelor's degree in Computer Science, Engineering, or a related technical field. A master's degree in a relevant field is highly preferred. Strong, hands-on experience in end-to-end Snowflake project implementation. Professional certifications in Snowflake are preferred. Expertise in designing, building, and maintaining ELT/ETL pipelines and data workflows, with a solid understanding of data warehousing best practices. Hands-on experience implementing dashboards in Power BI, including DAX and RLS. Professional certifications in Power BI are preferred. Proficiency in Python, with demonstrable experience deploying at least one AI/ML project (e.g., Snowpark, Databricks, SageMaker, Azure ML) including feature engineering, model deployment, and MLOps practices.
Experience with machine learning frameworks such as scikit-learn, TensorFlow, or PyTorch, and hands-on exposure to production deployments. Familiarity with projects involving LLM/Generative AI (e.g., chatbots, NLP, recommendation systems, and anomaly detection). Hands-on experience working with cloud platforms, specifically AWS or Azure. Excellent verbal and written communication, presentation, and client-facing consulting skills, with a proven track record of successfully leading projects from inception. Preferred (Added Advantage) Qualifications: Experience with Tableau or other leading BI tools. Working knowledge of Databricks (e.g., Spark, Delta Lake). Experience or strong understanding of Data Science methodologies and statistical modeling. Relevant industry certifications, including Power BI, Snowflake, Databricks, and AWS/Azure Data/AI credentials.
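As a deliberately small illustration of the train-evaluate-persist loop this posting alludes to, here is a hypothetical scikit-learn sketch; the synthetic dataset and artifact name are placeholders, and a real engagement would add proper feature engineering and MLOps controls.

```python
# Minimal, illustrative train/evaluate/persist loop (not a production OSI solution).
from joblib import dump
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real features (e.g., customer or transaction attributes).
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)

# Evaluate on the holdout set before promoting the model anywhere.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout ROC AUC: {auc:.3f}")

# Persist the fitted model; a registry or cloud ML service would manage this in practice.
dump(model, "example_model.joblib")  # hypothetical artifact name
```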
    $108k-153k yearly est. 29d ago
  • Senior Data Engineer

    Airspace 3.5 company rating

    Data engineer job in Carlsbad, CA

    Job Description Company Introduction: Airspace is a tech-enabled freight forwarder that's redefining how the world's most critical packages are delivered. Headquartered in Carlsbad, California, Airspace has employees who are based around the world. Our European headquarters is in Amsterdam, The Netherlands. As a recognized leader in AI and machine learning, our team leverages data and patented technology to coordinate logistics across a global network of drivers and airlines. Our goal is to deliver those packages that are truly mission-critical in a way that is faster, more transparent, more secure, and more accountable than ever before. The items we deliver range from organs for transplant, to parts for critical machinery including grounded aircraft and highly sensitive components such as semiconductors. Airspace has been rated one of America's best Startup Employers, listed as one of CNBC's Disruptor 50 companies, and featured as an Innovation and Disruption leader by CBS News. Airspace has the support of leading investors such as Telstra Ventures, HarbourVest Partners, Defy Partners, DBL Partners, and Scale Ventures. To date the company has raised more than $140m. The company is growing rapidly and serving more places around the world than ever before. We are looking for passionate, motivated individuals who want to make an IMPACT every day to help us execute on our mission of reshaping the world of time-critical logistics. About the Role As Senior Data Engineer, you will be the owner of all data infrastructure and ETL processes at Airspace. You will be responsible for ensuring our data pipelines are performant, scalable, and robust - and that they support our business needs today and as we grow. This role is not just about building great infrastructure - it's about being a great teammate. We're looking for someone who brings technical excellence and positive energy to the team, who can lead without ego, teach without condescension, and collaborate with humility and clarity. You'll be the technical thought leader for data engineering: advising on architectural decisions, executing high-impact refactors, and setting the roadmap for how data flows through our systems. This is a high-impact, high-autonomy role reporting directly to the Director of Data Science and AI, with the opportunity to shape the future of our data platform. What You'll Do Own and evolve our data infrastructure and pipelines Design, build, and maintain reliable ETL pipelines that ingest data from internal application postgres databases and external SaaS platforms (e.g. 
Salesforce, Twilio, Zendesk) Manage and scale our orchestration layer built in Airflow Ensure reliability, consistency, and performance across our data systems through strong engineering practices and operational discipline Oversee and optimize how data flows into and through our warehouse layer (Snowflake), ensuring transformations (via dbt) integrate cleanly into upstream and downstream systems Be a technical leader and strategic architect Set a high bar for technical excellence, promoting clean architecture, code quality, and maintainability through thoughtful design, hands-on coding, and code reviews Drive architectural improvements and lead system-level refactors to improve performance, scalability, and efficiency Continuously evaluate and recommend tools, frameworks, and methodologies that enhance platform capabilities and developer velocity Level up our data function across the full lifecycle Act as a trusted partner to stakeholders in Product, Data Science, and Infrastructure, helping shape technical roadmaps that align with business priorities Identify and resolve systemic bottlenecks in our data workflows, and proactively implement process improvements Mentor teammates and foster a collaborative, inclusive, and high-performing engineering culture What We're Looking For Required Skills & Experience 6+ years of experience in data engineering or backend infrastructure roles Expert-level Python engineering skills Thorough knowledge and understanding of Airflow (familiarity with Astronomer or Google Cloud Composer is a plus) Experience managing and optimizing data infrastructure built on Snowflake (preferred) or BigQuery, including warehouse design, performance tuning, and cost efficiency Familiarity with dbt core, including how it fits into modern data pipelines and transformation layers Solid understanding of SQL (especially in the context of large-scale warehouse environments) Proficiency in Git, version control, and collaborative development workflows Comfort working with CI/CD pipelines and deployment automation Experience ingesting data from both internal and third-party systems (see the brief sketch after this posting) Soft Skills We Value Collaborative and approachable: You enjoy partnering across disciplines, teaching and supporting others, and explaining complex technical ideas clearly and without ego. Feedback-oriented: You take feedback as a way to grow and improve, and give feedback in a constructive, respectful way. Bias for action: You're proactive and solutions-oriented, but know when to pause and bring others in. Growth mindset: You're curious, open to learning, and comfortable not always being the smartest person in the room. Why Join Airspace Be part of a mission-driven company solving real-world, time-critical problems Own and lead the data engineering function with significant autonomy Collaborate with a high-performing Data Science and AI team Competitive salary, equity, and benefits Opportunity to shape the future of our data platform and influence how data enables every function of the business Compensation: Salary Range: $130-180K High-quality health, dental, and vision plan options Unlimited PTO 401K with company match Core Values: We are One Team. We believe we all accomplish more when we are working together. We make an Impact. We are determined to have a positive influence on our environment, our customers, our industry, and our world. We are Passionate. We care deeply about our mission and are not afraid to raise the bar. We are Transparent.
We pride ourselves on having open, honest, and sincere communication with our team and customers. We are Innovative. We never settle and are always striving to improve our product, service, and ourselves. About Airspace: From life-saving organs to essential machinery components, Airspace is trusted by the world's largest companies and most critical healthcare organizations to move their most time-sensitive shipments on time, every time. Our proprietary AI-powered platform is the most advanced of its kind - awarded and protected by multiple patents, it provides speed, reliability, and transparency unrivaled in time-critical logistics. We are thinkers, builders, and doers; from building and deploying AI in the world to assembling a world-class operations team, Airspace is on a hypergrowth trajectory while remaining hyper-focused on the needs of our customers and team members. With offices in the United States in Carlsbad, CA and in Europe in Amsterdam, Frankfurt, Stockholm, and London, we are rapidly scaling into new markets and industries while continuing to innovate and maximize value for our customers. Backed by leading investors including Telstra, HarbourVest, Prologis, Qualcomm, Defy, and others, Airspace has raised $140M to date. Join our team of 300+ technologists, futurists, and industry veterans as we work as One Team to revolutionize time-critical logistics. Airspace is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. Additionally, Airspace participates in the E-Verify program for all locations. For this role, unsolicited outreach from recruitment agencies is not appreciated; thank you for your understanding.
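As a loose sketch of the ingestion work described in this posting, here is a minimal incremental extract from a Postgres source using a timestamp watermark. It assumes the psycopg2 package; the table, columns, and connection string are hypothetical, and a production pipeline would persist the watermark and hand the rows to a warehouse loader.

```python
# Illustrative incremental extract from Postgres using a high-watermark column.
# Table, columns, and connection string are hypothetical placeholders.
from datetime import datetime

import psycopg2  # assumes psycopg2 (or psycopg2-binary) is installed


def extract_updated_shipments(since: datetime) -> list:
    """Fetch rows updated after the given watermark timestamp."""
    conn = psycopg2.connect("dbname=app user=readonly host=replica.internal")
    try:
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT id, status, updated_at
                FROM shipments
                WHERE updated_at > %s
                ORDER BY updated_at
                """,
                (since,),
            )
            return cur.fetchall()
    finally:
        conn.close()


if __name__ == "__main__":
    # In a real DAG the watermark would come from a metadata store, not a constant.
    rows = extract_updated_shipments(datetime(2024, 1, 1))
    print(f"extracted {len(rows)} changed rows")
    # Next step (not shown): land the rows in object storage or a warehouse stage.
```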
    $130k-180k yearly 7d ago
  • Senior Data Engineer

    Green Street Advisors 4.5 company rating

    Data engineer job in Newport Beach, CA

    Interested in working at a fast-paced, growing FinTech company? Have a desire to work with the latest technologies including Python, Amazon Web Services (AWS), and custom-built APIs? If so, we are excited to have you join the Green Street technology team. We are a group of innovative technologists working together to fuel the growth of one of the fastest-growing technology companies in the Commercial Real Estate (CRE) space. We are looking to hire a talented, hands-on data engineer who is interested in working on our cloud-based solutions for pipelining public and private data from multiple sources throughout our systems. You will be working on an Agile team dedicated to parsing and processing, aggregating, and enriching the latest CRE data used by our clients and an internal team of researchers to provide the most comprehensive view of Commercial Real Estate in the world. What's it like working with our team of dedicated technologists? We are passionate about great Agile teams, engineering excellence through pair programming, automated testing, close collaboration with each other, and a highly transparent tech organization. You'll have multiple opportunities to work on interesting projects, move around teams, and engage with some of the brightest engineering minds anywhere. We love great engineers, and we are excited to get to know you better. Please share your resume today! Job Responsibilities Architect and build database schemas, data ingestion models, ETLs, and testing to uphold data integrity and availability Develop, maintain, and optimize current data architecture and pipelines, while maintaining documentation Provide leadership and mentorship to junior team members through training, pair programming, and code reviews Collaborate with product, research, operations, and front-end teams to build innovative solutions to achieve business goals Perform analysis on large aggregated (e.g. demographic, geographic) datasets Candidate Profile Excellent Python programming knowledge Excellent scripting skills and an extensive knowledge of SQL with the ability to write advanced queries to manipulate datasets Knowledge of and passion for database design, optimization techniques, and warehousing Strong leadership and interpersonal skills Ability to quickly learn, internalize complex pipelines, and efficiently troubleshoot bugs Experience with AWS infrastructure and CI/CD, and interest in front-end technologies is a plus Prior experience developing scalable APIs (REST and GraphQL) is a plus (see the brief sketch after this posting) Requirements A Bachelor's degree in Computer Science / Engineering or comparable experience Excellent programming skills with a minimum of 5 years of experience in Python (e.g. Pandas, SQLAlchemy, Flask, FastAPI, etc.) Extensive database skills with a minimum of 5 years of experience in either MySQL, SQL, or PostgreSQL Proficiency with Git is required; previous experience with the Atlassian product family (Confluence and Jira) is a plus Must enjoy collaborative problem solving, have excellent communication skills, and demonstrate attention to detail Prepared to work independently in a fast-paced, agile environment with the ability to multitask Track record of keeping up with new cloud technologies, languages, standards, and practices Prior knowledge of finance, real estate data, or mathematics is a plus Compensation, Benefits and Work Authorization In addition to the posted base salary range, this position is eligible for a performance bonus and benefits (subject to eligibility requirements) listed here.
Total compensation is based on several factors including, but not limited to, type of position, location, education level, work experience, and certifications. This information is applicable for all full-time positions. Green Street will not sponsor or transfer employment work visas for this position. Applicants must be currently authorized to work in the United States without the need for visa sponsorship now or in the future. Company Overview and EEOC/Diversity Green Street is a forward-thinking real assets company at the forefront of transforming the commercial real estate market with cutting-edge predictive analytics, data-driven insights, and actionable intelligence. With over 40 years of expertise, Green Street empowers investors, lenders, banks, and industry stakeholders across the U.S., Canada, Europe and Asia to make optimized investment and strategic decisions. To learn more, please visit ******************** The success of Green Street is directly attributable to the strength of our people. A diverse and inclusive work environment where top talent can thrive, think freely and offer different perspectives makes our insights even stronger. We're building a company culture where differences are celebrated and valued. Green Street is an Equal Opportunity Employer Green Street does not discriminate on the basis of race, sex, color, religion, age, national origin, marital status, disability, veteran status, genetic information, sexual orientation, gender identity or any other reason prohibited by law in provision of employment opportunities and benefits. Pay Range USD $145,000.00 - USD $155,000.00 /Yr. + Incentive Performance Bonus
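To illustrate the Python, SQL, and API skills this posting emphasizes, here is a minimal FastAPI-plus-SQLAlchemy sketch of a read-only endpoint over an aggregated dataset. The database URL, table, and route are hypothetical examples and do not reflect Green Street's systems.

```python
# Illustrative read-only API over an aggregated dataset (hypothetical schema and DSN).
from fastapi import FastAPI
from sqlalchemy import create_engine, text

app = FastAPI(title="example-cre-data-api")
engine = create_engine("postgresql+psycopg2://readonly@localhost/cre")  # placeholder DSN


@app.get("/markets/{market}/avg-rent")
def average_rent(market: str) -> dict:
    """Return the average asking rent for a market, aggregated in SQL."""
    query = text(
        "SELECT AVG(asking_rent) AS avg_rent "
        "FROM property_listings WHERE market = :market"
    )
    with engine.connect() as conn:
        avg_rent = conn.execute(query, {"market": market}).scalar()
    return {"market": market, "avg_rent": avg_rent}

# Run locally with: uvicorn example_api:app --reload  (module name is hypothetical)
```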
    $145k yearly Auto-Apply 60d+ ago
  • Sr. Big Data Engineer - Data Infrastructure

    TP-Link Systems 3.9 company rating

    Data engineer job in Irvine, CA

    ABOUT US: Headquartered in the United States, TP-Link Systems Inc. is a global provider of reliable networking devices and smart home products, consistently ranked as the world's top provider of Wi-Fi devices. The company is committed to delivering innovative products that enhance people's lives through faster, more reliable connectivity. With a commitment to excellence, TP-Link serves customers in over 170 countries and continues to grow its global footprint. We believe technology changes the world for the better! At TP-Link Systems Inc, we are committed to crafting dependable, high-performance products to connect users worldwide with the wonders of technology. Embracing professionalism, innovation, excellence, and simplicity, we aim to assist our clients in achieving remarkable global performance and enable consumers to enjoy a seamless, effortless lifestyle. KEY RESPONSIBILITIES Design and build scalable data pipelines: Develop and maintain high-performance, large-scale data ingestion and transformation, including ETL/ELT processes, data de-identification, and security management. Data orchestration and automation: Develop and manage automated data workflows using tools like Apache Airflow to schedule pipelines, manage dependencies, and ensure reliable, timely data processing and availability. AWS integration and cloud expertise: Build data pipelines integrated with AWS cloud-native storage and compute services, leveraging scalable cloud infrastructure for data processing. Monitoring and data quality: Implement comprehensive monitoring, logging, and alerting to ensure high availability, fault tolerance, and data quality through self-healing strategies and robust data validation processes. Technology innovation: Stay current with emerging big data technologies and industry trends, recommending and implementing new tools and approaches to continuously improve data infrastructure. Technical leadership: Provide technical leadership for data infrastructure teams, guide architecture decisions and system design best practices. Mentor junior engineers through code reviews and knowledge sharing, lead complex projects from concept to production, and help to foster a culture of operational excellence. Requirements REQUIRED QUALIFICATIONS Experience requirements: 5+ years in data engineering, software engineering, or data infrastructure with proven experience building and operating large-scale data pipelines and distributed systems in production, including terabyte-scale big data environments. Programming proficiency: Strong Python skills for building data pipelines and processing jobs, with the ability to write clean, maintainable, and efficient code. Experience with Git version control and collaborative development workflows required. Distributed systems expertise: Deep knowledge of distributed systems and parallel processing concepts. Proficient in debugging and performance tuning large-scale data systems, with understanding of data partitioning, sharding, consistency, and fault tolerance in distributed data processing. Big data frameworks: Strong proficiency in big data processing frameworks such as Apache Spark for batch processing and other relevant batch processing technologies (see the brief sketch after this posting). Database and data warehouse expertise: Strong understanding of relational database concepts and data warehouse principles. Workflow Orchestration: Hands-on experience with data workflow orchestration tools like Apache Airflow or AWS Step Functions for scheduling, coordinating, and monitoring complex data pipelines.
Problem solving and collaboration: Excellent problem-solving skills with strong attention to detail and the ability to work effectively in collaborative team environments. PREFERRED QUALIFICATIONS Advanced degree: Master's degree in Computer Science or a related field providing a strong theoretical foundation in large-scale distributed systems and data processing algorithms. Modern data technology: Exposure to agentic AI patterns, knowledge base systems, and expert systems is a plus. Experience with real-time stream processing frameworks like Apache Kafka, Apache Flink, Apache Beam, or pub/sub real-time messaging systems is a plus. Advanced database and data warehouse expertise: Familiarity with diverse database technologies in addition to relational, such as NoSQL, NewSQL, key-value, columnar, graph, document, and time-series databases. Ability to design and optimize schemas/data models for analytics use cases, with experience in modern data storage solutions like data warehouses (Redshift, BigQuery, Databricks, Snowflake). Additional programming languages: Proficiency in additional languages such as Java or Scala is a plus. Cloud and infrastructure expertise: Experience with AWS cloud platforms and hands-on skills in infrastructure as code (SDK, CDK, Terraform) and container orchestration (Docker/Kubernetes) for automated environment setup and scaling. Benefits Salary Range: $150,000 - $200,000 Free snacks and drinks, and provided lunch on Fridays Fully paid medical, dental, and vision insurance (partial coverage for dependents) Contributions to 401k funds Bi-annual reviews and annual pay increases Health and wellness benefits, including free gym membership Quarterly team-building events At TP-Link Systems Inc., we are continually searching for ambitious individuals who are passionate about their work. We believe that diversity fuels innovation and collaboration, and drives our entrepreneurial spirit. As a global company, we highly value diverse perspectives and are committed to cultivating an environment where all voices are heard, respected, and valued. We are dedicated to providing equal employment opportunities to all employees and applicants, and we prohibit discrimination and harassment of any kind based on race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Beyond compliance, we strive to create a supportive and growth-oriented workplace for everyone. If you share our passion and connection to this mission, we welcome you to apply and join us in building a vibrant and inclusive team at TP-Link Systems Inc. Please, no third-party agency inquiries, and we are unable to offer visa sponsorships at this time.
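As a small, hypothetical illustration of the Spark batch-processing work this posting describes, here is a PySpark sketch that reads raw events from object storage, applies a simple de-identification and roll-up, and writes partitioned output; the bucket paths and column names are placeholders only.

```python
# Illustrative PySpark batch job: raw events in, partitioned daily aggregates out.
# Bucket paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_event_rollup").getOrCreate()

# Read a day's worth of raw JSON events from object storage.
raw = spark.read.json("s3a://example-raw-bucket/events/dt=2024-01-01/")

# De-identify and aggregate: drop direct identifiers, then roll up per device model.
rollup = (
    raw.drop("user_email", "ip_address")            # simple de-identification step
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "device_model")
       .agg(F.count("*").alias("event_count"))
)

# Write partitioned Parquet for downstream warehouse loads.
(rollup.write
       .mode("overwrite")
       .partitionBy("event_date")
       .parquet("s3a://example-curated-bucket/event_rollups/"))

spark.stop()
```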
    $150k-200k yearly Auto-Apply 60d+ ago
  • Principal Data Engineer

    Americor

    Data engineer job in Irvine, CA

    Americor is a leading provider of debt relief solutions for people of all backgrounds. We offer various services to help our clients achieve financial freedom, including debt consolidation loans, debt settlement, and credit repair. Our dedication to others sets us apart - not only as a company but as a community of employees who support each other's personal and professional growth. We have been recognized as a ‘Top Place to Work’ and ‘Best Company’ for our outstanding service and commitment to excellence. We are currently seeking a Principal/Lead Analytics Engineer to join our rapidly growing team. The specific job title and offer will align with the qualifications of the candidate. *Applicants must be authorized to work for ANY employer in the U.S. We are unable to sponsor or take over sponsorship of an employment visa at this time. Compensation: $120,000-$150,000 annually, dependent on experience. Location: Irvine, CA. Hybrid (In-office every Tuesday). Schedule: Monday-Friday, with weekend on-call availability required as needed. Responsibilities: Own the bridge between data engineering, data science, and business intelligence for data analytics. Build and maintain ETL/ELT processes using Airflow, dbt, and custom Python scripts. Design and implement scalable, robust, and efficient data pipelines. Demonstrate and ensure best practices for data security, data validation, and data quality are in place and maintained. Stay current with the latest trends and developments in data engineering; champion internal adoption and change management of the latest practices. Other duties as assigned. Requirements: 7+ years of relevant experience. Advanced proficiency with Python. Advanced proficiency with SQL. Experience with the dbt Core CLI preferred. Advanced skills with database architecture and data pipelining. Experience utilizing cloud services to develop ETL/ELT pipelines; AWS Lambda, S3, etc., preferred (see the brief sketch after this posting). Experience using orchestrator, scheduling, and monitoring tools; Airflow and cron preferred. Experience leveraging Git for codebase source control; experience developing GitHub/Bitbucket CI/CD pipelines preferred. Experience with Docker and docker-compose preferred. Proven ability to position oneself to advise and teach across a data organization. Excellent verbal, written, and interpersonal communication skills. Education: Bachelor's degree in a quantitative field or equivalent experience. Master's degree preferred. Company Benefits: Ongoing training and development Opportunity for career advancement Medical, Dental, Vision Company Paid Group Life / AD&D Insurance 7 Paid Holidays and 2 Floating Holiday Days to use at will Paid Time Off Flexible Spending/HSA Employee Assistance Program (EAP) 401(k) match Referral Program Americor is proud to be an Equal Opportunity Employer. Americor does not discriminate based on race, color, gender, disability, veteran, military status, religion, age, creed, national origin, sexual identity or expression, sexual orientation, marital status, genetic information, or any other basis prohibited by local, state, or federal law. * Note to Agencies: Americor Funding, Inc. (the “Company”) has an internal recruiting department. Americor Funding Inc. may supplement that internal capability from time to time with assistance from temporary staffing agencies, placement services, and professional recruiters (“Agency”). Agencies are hereby specifically directed NOT to contact Americor Funding Inc. employees directly in an attempt to present candidates.
The Company's policy is for the internal recruiting team or other authorized personnel to present ALL candidates to hiring managers. Any unsolicited resumes sent to Americor Funding Inc. from a third party, such as an Agency, including unsolicited resumes sent to a Company mailing address, fax machine, or email address, directly to Company employees, or to the resume database, will be considered Company property. Americor Funding Inc. will NOT pay a fee for any placement resulting from the receipt of an unsolicited resume. Americor Funding Inc. will consider any candidate for whom an Agency has submitted an unsolicited resume to have been referred by the Agency free of any charges or fees. #LI-JR1
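As a brief, hypothetical sketch of the AWS Lambda and S3 pipeline pattern this posting mentions, here is a minimal handler that reacts to a new file landing in a bucket; the event wiring, bucket, and downstream action are placeholders rather than Americor's implementation.

```python
# Illustrative AWS Lambda handler for an S3 "object created" event.
# Bucket names and the downstream action are hypothetical placeholders.
import json

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    """Triggered by S3; reads the new object and records a naive row count."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    row_count = body.count("\n")  # naive count for a newline-delimited file

    # In a real pipeline this might enqueue a warehouse load job or notify Airflow.
    print(json.dumps({"bucket": bucket, "key": key, "rows": row_count}))
    return {"status": "ok", "rows": row_count}
```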
    $120k-150k yearly Auto-Apply 60d+ ago

Learn more about data engineer jobs

How much does a data engineer earn in Menifee, CA?

The average data engineer in Menifee, CA earns between $85,000 and $161,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in Menifee, CA

$117,000