
Data Engineer jobs at Lilt

- 1750 jobs
  • Data Engineer - Hadoop

    GTN Technical Staffing (3.8 company rating)

    New York, NY

    Data Engineer - Hadoop Administrator
    HIGHLIGHTS: Direct Hire. Compensation: BOE.
    We are seeking a Data Engineer to support Newton, our Data Science R&D compute cluster. This role functions as a Hadoop Administrator embedded within the ML Ops organization, providing hands-on operational support for the platform while partnering directly with data scientists, DevOps, and infrastructure teams. This individual will ensure the health, stability, performance, and usability of the Newton cluster, acting as the primary point of contact for platform support, troubleshooting, and environment optimization. This is a highly collaborative and technical role with room for long-term career progression.
    Key Responsibilities:
    • Serve as the primary administrator for the Newton Hadoop/Cloudera cluster.
    • Provide direct support to data scientists experiencing issues with jobs, workloads, dependencies, cluster resources, or environment performance.
    • Troubleshoot complex Hadoop, Spark, Python, and OS-level issues; drive root cause analysis and implement permanent fixes.
    • Coordinate closely with DevOps to ensure patching, upgrades, infrastructure changes, and system reliability activities are completed on schedule.
    • Monitor cluster performance, capacity, and resource utilization; tune and optimize for efficiency and cost.
    • Manage Hadoop and Cloudera configurations, services, security, policies, and operational health.
    • Implement automation and scripting to improve operational workflows and reduce manual intervention.
    • Validate vendor patches, updates, and upgrades and coordinate deployments with DevOps and infrastructure teams.
    • Maintain documentation, operational runbooks, troubleshooting guides, and environment standards.
    • Serve as a liaison between Data Science, ML Ops, Infrastructure, and DevOps teams to ensure seamless platform operations.
    • Support the organization's commitment to protecting the integrity, availability, and confidentiality of systems and data.
    Required Technical Skills:
    • Strong hands-on experience with Hadoop administration, ideally within Cloudera environments.
    • Proficiency with Python, particularly for automation and data workflows.
    • Experience with Apache Spark (supporting jobs, tuning performance, understanding resource usage).
    • Solid understanding of Linux/Unix systems administration, shell scripting, permissions, networking basics, and OS-level troubleshooting.
    • Experience supporting distributed compute environments or large-scale data platforms.
    • Familiarity with DevOps collaboration (patching, upgrades, deployments, incident response, etc.).
    Required Soft Skills & Competencies:
    • Excellent communication skills with the ability to work directly with data scientists and technical end users.
    • Ability to coordinate with multiple technical teams (DevOps, Infrastructure, ML Ops).
    • Strong troubleshooting and problem-solving capabilities.
    • Ability to manage multiple priorities in a fast-moving environment.
    Preferred Skills (Nice to Have):
    • Experience with ML Ops environments or supporting machine learning workflows.
    • Experience with cluster performance optimization and capacity planning.
    • Background in distributed systems or data engineering.
    (An illustrative code sketch follows this listing.)
    $105k-149k yearly est. 4d ago
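The listing above centers on monitoring cluster capacity and automating routine health checks. Purely as a loose, non-authoritative illustration, the sketch below polls the YARN ResourceManager REST API for basic capacity and node health; the hostname, port, and thresholds are assumptions, not anything specified in the posting.

```python
"""Hypothetical cluster health check against the YARN ResourceManager REST API."""
import requests

RM_URL = "http://newton-rm.example.com:8088"  # hypothetical ResourceManager host/port


def cluster_metrics() -> dict:
    # /ws/v1/cluster/metrics is the standard YARN ResourceManager metrics endpoint.
    resp = requests.get(f"{RM_URL}/ws/v1/cluster/metrics", timeout=10)
    resp.raise_for_status()
    return resp.json()["clusterMetrics"]


def check_capacity(metrics: dict, mem_threshold: float = 0.9) -> None:
    used, total = metrics["allocatedMB"], metrics["totalMB"]
    unhealthy = metrics["unhealthyNodes"]
    if total and used / total > mem_threshold:
        print(f"WARNING: memory utilization {used / total:.0%} exceeds {mem_threshold:.0%}")
    if unhealthy:
        print(f"WARNING: {unhealthy} unhealthy NodeManager(s)")


if __name__ == "__main__":
    check_capacity(cluster_metrics())
```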
  • Data Engineer

    Brooksource (4.1 company rating)

    New York, NY

    Data Engineer - Data Migration Project
    6-Month Contract (ASAP Start) | Hybrid - Manhattan, NY (3 days/week)
    We are seeking a Data Engineer to support a critical data migration initiative for a leading sports entertainment and gaming company headquartered in Manhattan, NY. This role will focus on transitioning existing data workflows and analytics pipelines from Amazon Redshift to Databricks, optimizing performance and ensuring seamless integration across operational reporting systems. The ideal candidate will have strong SQL and Python skills, experience working with Salesforce data, and a background in data engineering, ETL, or analytics pipeline optimization. This is a hybrid role requiring collaboration with cross-functional analytics, engineering, and operations teams to enhance data reliability and scalability.
    Minimum Qualifications:
    • Advanced proficiency in SQL, Python, and SOQL
    • Hands-on experience with Databricks, Redshift, Salesforce, and DataGrip
    • Experience building and optimizing ETL workflows and pipelines
    • Familiarity with Tableau for analytics and visualization
    • Strong understanding of data migration and transformation best practices
    • Ability to identify and resolve discrepancies between data environments
    • Excellent analytical, troubleshooting, and communication skills
    Responsibilities:
    • Modify and migrate existing workflows and pipelines from Redshift to Databricks.
    • Rebuild data preprocessing structures that prepare Salesforce data for Tableau dashboards and ad hoc analytics.
    • Identify and map Redshift data sources to their Databricks equivalents, accounting for any structural or data differences.
    • Optimize and consolidate 200+ artifacts to improve efficiency and reduce redundancy.
    • Implement Databricks-specific improvements to leverage platform capabilities and enhance workflow performance.
    • Collaborate with analytics and engineering teams to ensure data alignment across business reporting systems.
    • Apply a “build from scratch” mindset to design scalable, modernized workflows rather than direct lift-and-shift migrations.
    • Identify dependencies on data sources not yet migrated and assist in prioritization efforts with the engineering team.
    What's in it for you?
    • Opportunity to lead a high-impact data migration initiative at a top-tier gaming and entertainment organization.
    • Exposure to modern data platforms and architecture, including Databricks and advanced analytics workflows.
    • Collaborative environment with visibility across analytics, operations, and engineering functions.
    • Ability to contribute to the foundation of scalable, efficient, and data-driven decision-making processes.
    EEO Statement: Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
    (An illustrative code sketch follows this listing.)
    $101k-140k yearly est. 1d ago
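As a rough sketch of the kind of migration step this listing describes (repointing a Redshift-backed workflow to Databricks), the snippet below copies one table over JDBC into a Delta table. The connection string, credentials, and table names are placeholders; in practice this would run inside a Databricks notebook or job where `spark` is already provided and a Redshift-compatible JDBC driver is available on the cluster.

```python
"""Hypothetical one-table copy from Redshift into a Databricks Delta table."""
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

redshift_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:redshift://example-cluster:5439/analytics")  # hypothetical
    .option("dbtable", "public.salesforce_opportunities")             # hypothetical
    .option("user", "migration_user")
    .option("password", "***")
    .load()
)

# Land the data as a managed Delta table so downstream Tableau extracts and
# ad hoc queries can be repointed from Redshift to Databricks.
(
    redshift_df.write.format("delta")
    .mode("overwrite")
    .saveAsTable("bronze.salesforce_opportunities")  # hypothetical target table
)
```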
  • Senior Data Engineer

    Robert Half (4.5 company rating)

    Los Angeles, CA

    Robert Half is partnering with a well-known brand seeking an experienced Data Engineer with Databricks experience. Working alongside data scientists and software developers, your work will directly impact dynamic pricing strategies by ensuring the availability, accuracy, and scalability of data systems. This position is full time with full benefits and 3 days onsite in the Woodland Hills, CA area.
    Responsibilities:
    • Design, build, and maintain scalable data pipelines for dynamic pricing models.
    • Collaborate with data scientists to prepare data for model training, validation, and deployment.
    • Develop and optimize ETL processes to ensure data quality and reliability.
    • Monitor and troubleshoot data workflows for continuous integration and performance.
    • Partner with software engineers to embed data solutions into product architecture.
    • Ensure compliance with data governance, privacy, and security standards.
    • Translate stakeholder requirements into technical specifications.
    • Document processes and contribute to data engineering best practices.
    Requirements:
    • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
    • 4+ years of experience in data engineering, data warehousing, and big data technologies.
    • Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server).
    • Must have experience in Databricks.
    • Experience working within an Azure, AWS, or GCP environment.
    • Familiarity with big data tools like Spark, Hadoop, or Databricks.
    • Experience with real-time data pipeline tools.
    • Experience with Python.
    $116k-165k yearly est. 4d ago
  • Data Engineer

    Robert Half Recruiting (4.5 company rating)

    Culver City, CA

    Robert Half is partnering with a well-known high tech company seeking an experienced Data Engineer with strong Python and SQL skills. The primary duties involve managing the complete data lifecycle and utilizing extensive datasets across marketing, software, and web platforms. This position is full time with full benefits and 3 days onsite in the Culver City area.
    Responsibilities:
    • 4+ years of professional experience, ideally in a combination of data engineering and business intelligence.
    • Working heavily with SQL and programming in Python.
    • Ownership mindset to oversee the entire data lifecycle, including collection, extraction, and cleansing processes.
    • Building reports and data visualizations to help advance the business.
    • Leverage industry-standard tools for data integration such as Talend.
    • Work extensively within cloud-based ecosystems such as AWS and GCP.
    Requirements:
    • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
    • 5+ years of experience in data engineering, data warehousing, and big data technologies.
    • Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server) and NoSQL technologies.
    • Experience working within GCP and AWS environments.
    • Experience with real-time data pipeline tools.
    • Hands-on expertise with Google Cloud services, including BigQuery.
    • Deep knowledge of SQL, including dimension tables, and experience in Python programming.
    $116k-165k yearly est. 4d ago
  • Senior Data Engineer

    Brooksource (4.1 company rating)

    Indianapolis, IN

    Senior Data Engineer - Azure Data Warehouse (5-7+ Years Experience)
    Long-term renewing contract supporting Azure-based data warehouse and dashboarding initiatives. Work alongside architects, analysts, and researchers to build scalable, auditable, and business-aligned data assets using modern cloud tools and best practices.
    Key Responsibilities:
    • Design and implement scalable data pipelines using ADF, Databricks, and Azure SQL Server
    • Apply Medallion architecture principles and best practices for data lake and warehouse design
    • Collaborate with Data Architects, Analysts, and Researchers to translate business needs into technical solutions
    • Develop and maintain CI/CD pipelines for data workflows and dashboard deployments
    • Lead troubleshooting and debugging efforts across ETL, SQL, and cloud environments
    • Mentor junior team members and promote best practices in data modeling, cleansing, and promotion
    • Support dashboarding initiatives with Power BI and wireframe collaboration
    • Ensure auditability, lineage, and performance across SQL Server and Oracle environments
    Required Skills & Experience:
    • 5-7+ years in data engineering, data warehouse design, and ETL development
    • Strong expertise in Azure Data Factory, Databricks, and Python
    • Deep understanding of SQL Server, Oracle, PostgreSQL, and Cosmos DB, plus data modeling standards
    • Proven experience with Medallion architecture and data lakehouse best practices
    • Hands-on with CI/CD, DevOps, and deployment automation
    • Agile mindset with ability to manage multiple priorities and deliver on time
    • Excellent communication and documentation skills
    Bonus Skills:
    • Experience with GCP or AWS
    • Familiarity with Jira, Confluence, and AppDynamics
    (An illustrative code sketch follows this listing.)
    $77k-104k yearly est. 3d ago
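A minimal sketch of the Medallion (bronze/silver/gold) promotion pattern named in this listing, assuming a Databricks or Spark environment backed by Delta tables; the table names, columns, and cleansing rules below are hypothetical.

```python
"""Hypothetical bronze-to-silver promotion step in a Medallion-style lakehouse."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze holds raw, append-only records; names are placeholders.
bronze = spark.read.table("lakehouse.bronze.claims_raw")

# Silver applies basic cleansing: de-duplication, null filtering, typed columns.
silver = (
    bronze
    .dropDuplicates(["claim_id"])
    .filter(F.col("claim_amount").isNotNull())
    .withColumn("ingested_date", F.to_date("ingested_at"))
)

(
    silver.write.format("delta")
    .mode("overwrite")
    .saveAsTable("lakehouse.silver.claims")  # hypothetical curated table
)
```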
  • Big Data Engineer

    Kellymitchell Group (4.5 company rating)

    Santa Monica, CA

    Our client is seeking a Big Data Engineer to join their team! This position is located in Santa Monica, California.
    • Design and build core components of a large-scale data platform for both real-time and batch processing, owning key features of big data applications that evolve with business needs
    • Develop next-generation, cloud-based big data infrastructure supporting batch and streaming workloads, with continuous improvements to performance, scalability, reliability, and availability
    • Champion engineering excellence, promoting best practices such as design patterns, CI/CD, thorough code reviews, and automated testing
    • Drive innovation, contributing new ideas and applying cutting-edge technologies to deliver impactful solutions
    • Participate in the full software development lifecycle, including system design, experimentation, implementation, deployment, and testing
    • Collaborate closely with program managers, product managers, SDETs, and researchers in an open, agile, and highly innovative environment
    Desired Skills/Experience:
    • Bachelor's degree in a STEM field such as Science, Technology, Engineering, or Mathematics
    • 5+ years of relevant professional experience
    • 4+ years of professional software development experience using Java, Scala, Python, or similar programming languages
    • 3+ years of hands-on big data development experience with technologies such as Spark, Flink, SingleStore, Kafka, NiFi, and AWS big data tools
    • Strong understanding of system and application design, architecture principles, and distributed system fundamentals
    • Proven experience building highly available, scalable, and production-grade services
    • Genuine passion for technology, with the ability to work across interdisciplinary areas and adopt new tools or approaches
    • Experience processing massive datasets at the petabyte scale
    • Proficiency with cloud infrastructure and DevOps tools, such as Terraform, Kubernetes (K8s), Spinnaker, IAM, and ALB
    • Hands-on experience with modern data warehousing and analytics platforms, including ClickHouse, Druid, Snowflake, Impala, Presto, Kinesis, and more
    • Familiarity with common web development frameworks, such as Spring Boot, React.js, Vue.js, or Angular
    Benefits:
    • Medical, Dental, & Vision Insurance Plans
    • Employee-Owned Profit Sharing (ESOP)
    • 401K offered
    The approximate pay range for this position is between $52.00 and $75.00. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    (An illustrative code sketch follows this listing.)
    $52-75 hourly 2d ago
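A minimal sketch of the streaming side of the platform this listing describes: reading events from Kafka with Spark Structured Streaming and appending them to a Delta sink. The broker address, topic, schema, and storage paths are assumptions for illustration only.

```python
"""Hypothetical Kafka-to-Delta streaming ingestion with Spark Structured Streaming."""
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.getOrCreate()

# Assumed shape of the incoming JSON events.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # hypothetical broker
    .option("subscribe", "user-events")                   # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Continuously append parsed events to a Delta table location.
(
    events.writeStream.format("delta")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/user-events")
    .outputMode("append")
    .start("s3://example-bucket/delta/user_events")
)
```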
  • Senior Data Engineer

    Kellymitchell Group (4.5 company rating)

    Glendale, CA

    Our client is seeking a Senior Data Engineer to join their team! This position is located in Glendale, California.
    • Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
    • Build tools and services to support data discovery, lineage, governance, and privacy
    • Collaborate with other software and data engineers and cross-functional teams
    • Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, Kubernetes, and AWS
    • Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
    • Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more
    • Ensure high operational efficiency and quality of Core Data platform datasets to meet SLAs and ensure reliability and accuracy for stakeholders in Engineering, Data Science, Operations, and Analytics
    • Participate in agile and scrum ceremonies to collaborate and refine team processes
    • Engage with customers to build relationships, understand needs, and prioritize both innovative solutions and incremental platform improvements
    • Maintain detailed documentation of work and changes to support data quality and data governance requirements
    Desired Skills/Experience:
    • 5+ years of data engineering experience developing large data pipelines
    • Proficiency in at least one major programming language such as Python, Java, or Scala
    • Strong SQL skills and the ability to create queries to analyze complex datasets
    • Hands-on production experience with distributed processing systems such as Spark
    • Experience interacting with and ingesting data efficiently from API data sources
    • Experience coding with the Spark DataFrame API to create data engineering workflows in Databricks
    • Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
    • Experience developing APIs with GraphQL
    • Deep understanding of AWS or other cloud providers, as well as infrastructure-as-code
    • Familiarity with data modeling techniques and data warehousing best practices
    • Strong algorithmic problem-solving skills
    • Excellent written and verbal communication skills
    • Advanced understanding of OLTP versus OLAP environments
    Benefits:
    • Medical, Dental, & Vision Insurance Plans
    • Employee-Owned Profit Sharing (ESOP)
    • 401K offered
    The approximate pay range for this position is between $51.00 and $73.00. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    (An illustrative code sketch follows this listing.)
    $51-73 hourly 1d ago
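A minimal sketch of the pipeline orchestration this listing references: a small daily Airflow DAG with an ingest step feeding a transform step. The DAG id, schedule, and task bodies are hypothetical, it assumes Apache Airflow 2.4+, and a real deployment would invoke Databricks or Spark jobs rather than print.

```python
"""Hypothetical daily DAG sketched with the Airflow TaskFlow API."""
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["core-data"])
def core_data_pipeline():
    @task
    def ingest_from_api() -> str:
        # In a real pipeline this would page through an API and land raw files.
        return "s3://example-bucket/raw/latest/"  # hypothetical landing path

    @task
    def transform(raw_path: str) -> None:
        # A Databricks or Spark DataFrame transformation would run here,
        # e.g. via the Databricks provider's run operators.
        print(f"Transforming data at {raw_path}")

    transform(ingest_from_api())


core_data_pipeline()
```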
  • Data Architect - Azure Databricks

    Fractal (4.2 company rating)

    Palo Alto, CA

    Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor’ and a ‘Vendor to Watch’ by Gartner. Please visit Fractal | Intelligence for Imagination for more information about Fractal.
    Job Posting Title: Principal Architect - Azure Databricks
    Job Description: Seeking a visionary and hands-on Principal Architect to lead large-scale, complex technical initiatives leveraging Databricks within the healthcare payer domain. This role is pivotal in driving data modernization, advanced analytics, and AI/ML solutions for our clients. You will serve as a strategic advisor, technical leader, and delivery expert across multiple engagements.
    Responsibilities:
    Design & Architecture of Scalable Data Platforms
    • Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs such as sales forecasting, trade promotions, supply chain optimization, etc.
    • Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
    • Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
    Client & Business Stakeholder Engagement
    • Partner with business stakeholders to translate functional requirements into scalable technical solutions.
    • Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases.
    Data Pipeline Development & Collaboration
    • Collaborate with data engineers and data scientists to develop end-to-end pipelines using PySpark, SQL, DLT (Delta Live Tables), and Databricks Workflows.
    • Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
    Performance, Scalability, and Reliability
    • Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
    • Implement monitoring and alerting using Databricks Observability, Ganglia, and cloud-native tools.
    Security, Compliance & Governance
    • Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
    • Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
    Adoption of AI Copilots & Agentic Development
    • Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for: writing PySpark, SQL, and Python code snippets for data engineering and ML tasks; generating documentation and test cases to accelerate pipeline development; and interactive debugging and iterative code optimization within notebooks.
    • Advocate for agentic AI workflows that use specialized agents for data profiling and schema inference, and automated testing and validation.
    Innovation and Continuous Learning
    • Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
    • Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
    Requirements:
    • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
    • 12-18 years of hands-on experience in data engineering, with at least 5+ years on Databricks architecture and Apache Spark.
    • Expertise in building high-throughput, low-latency ETL/ELT pipelines on Azure Databricks using PySpark, SQL, and Databricks-native features.
    • Familiarity with ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage (Azure Data Lake Storage Gen2).
    • Experience designing Lakehouse architectures with bronze, silver, gold layering.
    • Expertise in optimizing Databricks performance using Delta Lake features such as OPTIMIZE, VACUUM, ZORDER, and Time Travel.
    • Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
    • Experience with designing data marts using Databricks SQL warehouse and integrating with BI tools (Power BI, Tableau, etc.).
    • Hands-on experience designing solutions using Workflows (Jobs), Delta Lake, Delta Live Tables (DLT), Unity Catalog, and MLflow.
    • Familiarity with Databricks REST APIs, Notebooks, and cluster configurations for automated provisioning and orchestration.
    • Experience integrating Databricks with CI/CD pipelines using tools such as Azure DevOps and GitHub Actions.
    • Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning Databricks workspaces and resources.
    • In-depth experience with Azure Cloud services such as ADF, Synapse, ADLS, Key Vault, Azure Monitor, and Azure Security Centre.
    • Strong understanding of data privacy, access controls, and governance best practices.
    • Experience working with Unity Catalog, RBAC, tokenization, and data classification frameworks.
    • Worked as a consultant for more than 4-5 years with multiple clients.
    • Contribute to pre-sales, proposals, and client presentations as a subject matter expert.
    • Participated in and led RFP responses for your organization.
    • Experience in providing solutions for technical problems and providing cost estimates.
    • Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
    • Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
    • Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.
    Pay: The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $200,000 - $300,000. In addition, you may be eligible for a discretionary bonus for the current performance period.
    Benefits: As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take time needed for either sick time or vacation. Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
    (An illustrative code sketch follows this listing.)
    $200k-300k yearly 2d ago
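The requirements above call out Delta Lake maintenance features (OPTIMIZE, VACUUM, ZORDER, Time Travel). Purely as an illustration, the snippet below issues those commands from PySpark; the schema, table, and column names are made up, and retention settings would depend on a workspace's own policies.

```python
"""Hypothetical Delta Lake maintenance commands issued via spark.sql."""
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows by a frequently filtered column.
spark.sql("OPTIMIZE gold.claims_summary ZORDER BY (member_id)")

# Remove files no longer referenced by the table, keeping 7 days (168 hours) of
# history so Time Travel queries within that window still resolve.
spark.sql("VACUUM gold.claims_summary RETAIN 168 HOURS")
```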
  • Data Governance Lead - Data Architecture & Governance

    Addison Group (4.6 company rating)

    New York, NY

    Job Title: Data Governance Lead - Data Architecture & Governance
    Employment Type: Full-Time
    Base Salary: $220K to $250K (based on experience) + bonus; eligible for medical, dental, and vision
    About the Role: We are seeking an experienced Data Governance Lead to join a dynamic data and analytics team in New York. This role will design and oversee the organization's data governance framework, stewardship model, and data quality approach across financial services business lines, ensuring trusted and well-defined data for reporting and analytics across the Databricks lakehouse, CRM, management reporting, data science teams, and GenAI initiatives.
    Primary Responsibilities:
    • Design, implement, and refine an enterprise-wide data governance framework, including policies, standards, and roles for data ownership and stewardship.
    • Lead the design of data quality monitoring, dashboards, reporting, and exception-handling processes, coordinating remediation with stewards and technology teams.
    • Drive communication and change management for governance policies and standards, making them practical and understandable for business stakeholders.
    • Define governance processes for critical data domains (e.g., companies, contacts, funds, deals, clients, sponsors) to ensure consistency, compliance, and business value.
    • Identify and onboard business data owners and stewards across business teams.
    • Partner with Data Solution Architects and business stakeholders to align definitions, semantics, and survivorship rules, including support for DealCloud implementations.
    • Define and prioritize data quality rules and metrics for key data domains.
    • Develop training and onboarding materials for stewards and users to reinforce governance practices and improve reporting, risk management, and analytics outcomes.
    Qualifications:
    • 6-8 years in data governance, data management, or related roles, preferably within financial services.
    • Strong understanding of data governance concepts, including stewardship models, data quality management, and issue-resolution processes.
    • Familiarity with CRM or deal management platforms (e.g., DealCloud, Salesforce) and modern data platforms (e.g., Databricks or similar).
    • Proficiency in SQL for data investigation, ad hoc analysis, and validation of data quality rules.
    • Comfortable working with Databricks, Jupyter notebooks, Excel, and BI tools.
    • Python skills for automation, data wrangling, profiling, and validation are strongly preferred.
    • Exposure to investment banking, equities, or private markets data is a plus.
    • Excellent written and verbal communication skills with the ability to lead cross-functional discussions and influence senior stakeholders.
    • Highly organized, proactive, and able to balance strategic governance framework design with hands-on execution.
    (An illustrative code sketch follows this listing.)
    $220k-250k yearly 1d ago
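As a loose illustration of the data-quality rules and profiling this listing mentions, the sketch below checks a CRM contacts table for missing emails and duplicated keys with PySpark; the table, columns, and reporting approach are hypothetical and assume a Databricks or Spark environment.

```python
"""Hypothetical data-quality profiling of a CRM contacts table."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

contacts = spark.read.table("crm.silver.contacts")  # hypothetical governed table

total = contacts.count()
null_emails = contacts.filter(F.col("email").isNull()).count()
duplicate_ids = (
    contacts.groupBy("contact_id").count().filter(F.col("count") > 1).count()
)

# A real exception-handling process would route failures to data stewards;
# here the violations are simply reported.
print(f"{total} rows | {null_emails} missing emails | {duplicate_ids} duplicated contact_ids")
```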
  • AWS Data Architect

    Fractal (4.2 company rating)

    San Jose, CA

    Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor’ and a ‘Vendor to Watch’ by Gartner. Please visit Fractal | Intelligence for Imagination for more information about Fractal.
    Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will work on designing the system architecture and solution, ensuring the platform is scalable while performant, and creating automated data pipelines.
    Responsibilities:
    Design & Architecture of Scalable Data Platforms
    • Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs.
    • Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
    • Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
    Client & Business Stakeholder Engagement
    • Partner with business stakeholders to translate functional requirements into scalable technical solutions.
    • Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases.
    Data Pipeline Development & Collaboration
    • Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, and SQL.
    • Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
    Performance, Scalability, and Reliability
    • Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
    • Implement monitoring and alerting using Databricks Observability, Ganglia, and cloud-native tools.
    Security, Compliance & Governance
    • Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
    • Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
    Adoption of AI Copilots & Agentic Development
    • Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for writing PySpark, SQL, and Python code snippets for data engineering and ML tasks; generating documentation and test cases to accelerate pipeline development; and interactive debugging and iterative code optimization within notebooks.
    • Advocate for agentic AI workflows that use specialized agents for data profiling and schema inference, and automated testing and validation.
    Innovation and Continuous Learning
    • Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
    • Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
    Requirements:
    • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
    • 8-12 years of hands-on experience in data engineering, with at least 5+ years on Python and Apache Spark.
    • Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL.
    • Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc.
    • Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage.
    • Experience designing Lakehouse architectures with bronze, silver, gold layering.
    • Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
    • Experience with designing data marts using cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
    • Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
    • Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources.
    • In-depth experience with AWS Cloud services such as Glue, S3, Redshift, etc.
    • Strong understanding of data privacy, access controls, and governance best practices.
    • Experience working with RBAC, tokenization, and data classification frameworks.
    • Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
    • Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
    • Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.
    • Must be able to work in the PST time zone.
    Pay: The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
    Benefits: As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
    Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
    (An illustrative code sketch follows this listing.)
    $150k-180k yearly 3d ago
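The requirements above mention building dynamic ingestion frameworks over APIs, flat files, and cloud storage. As a hedged sketch only, the snippet below shows a config-driven loop that lands S3 sources into bronze Delta tables; the source list, paths, and table names are invented for illustration, and a real framework would read them from a metadata store.

```python
"""Hypothetical config-driven ingestion of raw S3 sources into bronze Delta tables."""
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# In practice this configuration would come from a metadata table or config file.
SOURCES = [
    {"path": "s3://example-raw/pos/", "format": "json", "table": "bronze.pos_events"},
    {"path": "s3://example-raw/erp/", "format": "csv", "table": "bronze.erp_orders"},
]

for src in SOURCES:
    reader = spark.read.format(src["format"])
    if src["format"] == "csv":
        reader = reader.option("header", "true")  # CSV files assumed to carry headers
    df = reader.load(src["path"])
    # Append raw records into the bronze layer for later cleansing and promotion.
    df.write.format("delta").mode("append").saveAsTable(src["table"])
```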
  • AWS Data Architect

    Fractal (4.2 company rating)

    Santa Rosa, CA

    Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor’ and a ‘Vendor to Watch’ by Gartner. Please visit Fractal | Intelligence for Imagination for more information about Fractal.
    Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will work on designing the system architecture and solution, ensuring the platform is scalable while performant, and creating automated data pipelines.
    Responsibilities:
    Design & Architecture of Scalable Data Platforms
    • Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs.
    • Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
    • Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
    Client & Business Stakeholder Engagement
    • Partner with business stakeholders to translate functional requirements into scalable technical solutions.
    • Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases.
    Data Pipeline Development & Collaboration
    • Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, and SQL.
    • Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
    Performance, Scalability, and Reliability
    • Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
    • Implement monitoring and alerting using Databricks Observability, Ganglia, and cloud-native tools.
    Security, Compliance & Governance
    • Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
    • Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
    Adoption of AI Copilots & Agentic Development
    • Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for writing PySpark, SQL, and Python code snippets for data engineering and ML tasks; generating documentation and test cases to accelerate pipeline development; and interactive debugging and iterative code optimization within notebooks.
    • Advocate for agentic AI workflows that use specialized agents for data profiling and schema inference, and automated testing and validation.
    Innovation and Continuous Learning
    • Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
    • Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
    Requirements:
    • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
    • 8-12 years of hands-on experience in data engineering, with at least 5+ years on Python and Apache Spark.
    • Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL.
    • Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc.
    • Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage.
    • Experience designing Lakehouse architectures with bronze, silver, gold layering.
    • Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
    • Experience with designing data marts using cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
    • Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
    • Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources.
    • In-depth experience with AWS Cloud services such as Glue, S3, Redshift, etc.
    • Strong understanding of data privacy, access controls, and governance best practices.
    • Experience working with RBAC, tokenization, and data classification frameworks.
    • Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
    • Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
    • Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.
    • Must be able to work in the PST time zone.
    Pay: The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
    Benefits: As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
    Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
    $150k-180k yearly 3d ago
  • AWS Data Architect

    Fractal (4.2 company rating)

    San Francisco, CA

    Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor’ and a ‘Vendor to Watch’ by Gartner. Please visit Fractal | Intelligence for Imagination for more information about Fractal.
    Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will work on designing the system architecture and solution, ensuring the platform is scalable while performant, and creating automated data pipelines.
    Responsibilities:
    Design & Architecture of Scalable Data Platforms
    • Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs.
    • Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management).
    • Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility.
    Client & Business Stakeholder Engagement
    • Partner with business stakeholders to translate functional requirements into scalable technical solutions.
    • Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases.
    Data Pipeline Development & Collaboration
    • Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, and SQL.
    • Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets.
    Performance, Scalability, and Reliability
    • Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques.
    • Implement monitoring and alerting using Databricks Observability, Ganglia, and cloud-native tools.
    Security, Compliance & Governance
    • Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies.
    • Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging.
    Adoption of AI Copilots & Agentic Development
    • Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for writing PySpark, SQL, and Python code snippets for data engineering and ML tasks; generating documentation and test cases to accelerate pipeline development; and interactive debugging and iterative code optimization within notebooks.
    • Advocate for agentic AI workflows that use specialized agents for data profiling and schema inference, and automated testing and validation.
    Innovation and Continuous Learning
    • Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling.
    • Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements.
    Requirements:
    • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
    • 8-12 years of hands-on experience in data engineering, with at least 5+ years on Python and Apache Spark.
    • Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL.
    • Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc.
    • Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage.
    • Experience designing Lakehouse architectures with bronze, silver, gold layering.
    • Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
    • Experience with designing data marts using cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
    • Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
    • Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources.
    • In-depth experience with AWS Cloud services such as Glue, S3, Redshift, etc.
    • Strong understanding of data privacy, access controls, and governance best practices.
    • Experience working with RBAC, tokenization, and data classification frameworks.
    • Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
    • Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
    • Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.
    • Must be able to work in the PST time zone.
    Pay: The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
    Benefits: As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
    Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
    $150k-180k yearly 3d ago
  • Data Scientist

    Us Tech Solutions (4.4 company rating)

    Alhambra, CA

    Title: Principal Data Scientist
    Duration: 12 Months Contract
    Additional Information: California resident candidates only. This position is HYBRID (2 days onsite, 2 days telework). Interviews will be conducted via Microsoft Teams. The work schedule follows a 4/40 (10-hour days, Monday-Thursday), with the specific shift determined by the program manager. Shifts may range between 7:15 a.m. and 6:00 p.m.
    Job description: The Principal Data Scientist works to establish a comprehensive Data Science Program to advance data-driven decision-making, streamline operations, and fully leverage modern platforms, including Databricks or similar, to meet increasing demand for predictive analytics and AI solutions. The Principal Data Scientist will guide program development, provide training and mentorship to junior members of the team, accelerate adoption of advanced analytics, and build internal capacity through structured mentorship. The Principal Data Scientist will possess exceptional communication abilities, both verbal and written, with a strong customer service mindset and the ability to translate complex concepts into clear, actionable insights; strong analytical and business acumen, including foundational experience with regression, association analysis, outlier detection, and core data analysis principles; working knowledge of database design and organization, with the ability to partner effectively with Data Management and Data Engineering teams; outstanding time management and organizational skills, with demonstrated success managing multiple priorities and deliverables in parallel; a highly collaborative work style, coupled with the ability to operate independently, maintain focus, and drive projects forward with minimal oversight; a meticulous approach to quality, ensuring accuracy, reliability, and consistency in all deliverables; and proven mentorship capabilities, including the ability to guide, coach, and upskill junior data scientists and analysts.
    Experience Required:
    • Five (5)+ years of professional experience leading data science initiatives, including developing machine learning models, statistical analyses, and end-to-end data science workflows in production environments.
    • Three (3)+ years of experience working with Databricks and similar cloud-based analytics platforms, including notebook development, feature engineering, ML model training, and workflow orchestration.
    • Three (3)+ years of experience applying advanced analytics and predictive modeling (e.g., regression, classification, clustering, forecasting, natural language processing).
    • Two (2)+ years of experience implementing MLOps practices, such as model versioning, CI/CD for ML, MLflow, automated pipelines, and model performance monitoring.
    • Two (2)+ years of experience collaborating with data engineering teams to design data pipelines, optimize data transformations, and implement Lakehouse or data warehouse architectures (e.g., Databricks, Snowflake, SQL-based platforms).
    • Two (2)+ years of experience mentoring or supervising junior data scientists or analysts, including code reviews, training, and structured skill development.
    • Two (2)+ years of experience with Python and SQL programming, using data sources such as SQL Server, Oracle, PostgreSQL, or similar relational databases.
    • One (1)+ year of experience operationalizing analytics within enterprise governance frameworks, partnering with Data Management, Security, and IT to ensure compliance, reproducibility, and best practices.
    Education Required & Certifications: This classification requires possession of a Master's degree or higher in Data Science, Statistics, Computer Science, or a closely related field. Additional qualifying professional experience may be substituted for the required education on a year-for-year basis. At least one of the following industry-recognized certifications in data science or cloud analytics is also required and may not be substituted with additional experience:
    • Microsoft Azure Data Scientist Associate (DP-100)
    • Databricks Certified Data Scientist or Machine Learning Professional
    • AWS Machine Learning Specialty
    • Google Professional Data Engineer
    • or equivalent advanced analytics certifications
    About US Tech Solutions: US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit ************************ US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
    Recruiter Details:
    Name: T Saketh Ram Sharma
    Email: *****************************
    Internal Id: 25-54101
    (An illustrative code sketch follows this listing.)
    $92k-133k yearly est. 3d ago
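The experience requirements above reference MLOps practices such as MLflow-based model versioning. The sketch below is illustrative rather than prescriptive: it trains a small scikit-learn model, logs a metric, and registers a model version with MLflow; the dataset, metric, and registry name are placeholders, and it assumes an environment where an MLflow tracking server and model registry are already configured.

```python
"""Hypothetical MLflow tracking and model-registration example."""
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real workflow would read curated feature tables.
X, y = make_classification(n_samples=1_000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="baseline_classifier"):
    model = LogisticRegression(max_iter=500).fit(X_train, y_train)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    # Registering the model assigns it a version that downstream jobs can pin to.
    mlflow.sklearn.log_model(model, "model", registered_model_name="demo_classifier")
```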
  • AWS Data Architect

    Fractal (4.2 company rating)

    Fremont, CA

    Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets; an ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute and recognized as a ‘Cool Vendor' and a ‘Vendor to Watch' by Gartner. Please visit Fractal | Intelligence for Imagination for more information about Fractal. Fractal is looking for a proactive and driven AWS Lead Data Architect/Engineer to join our cloud and data tech team. In this role, you will work on designing the system architecture and solution, ensuring the platform is scalable while performant, and creating automated data pipelines. Responsibilities: Design & Architecture of Scalable Data Platforms Design, develop, and maintain large-scale data processing architectures on the Databricks Lakehouse Platform to support business needs Architect multi-layer data models including Bronze (raw), Silver (cleansed), and Gold (curated) layers for various domains (e.g., Retail Execution, Digital Commerce, Logistics, Category Management). Leverage Delta Lake, Unity Catalog, and advanced features of Databricks for governed data sharing, versioning, and reproducibility. Client & Business Stakeholder Engagement Partner with business stakeholders to translate functional requirements into scalable technical solutions. Conduct architecture workshops and solutioning sessions with enterprise IT and business teams to define data-driven use cases Data Pipeline Development & Collaboration Collaborate with data engineers and data scientists to develop end-to-end pipelines using Python, PySpark, SQL Enable data ingestion from diverse sources such as ERP (SAP), POS data, Syndicated Data, CRM, e-commerce platforms, and third-party datasets. Performance, Scalability, and Reliability Optimize Spark jobs for performance tuning, cost efficiency, and scalability by configuring appropriate cluster sizing, caching, and query optimization techniques. Implement monitoring and alerting using Databricks Observability, Ganglia, Cloud-native tools Security, Compliance & Governance Design secure architectures using Unity Catalog, role-based access control (RBAC), encryption, token-based access, and data lineage tools to meet compliance policies. Establish data governance practices including Data Fitness Index, Quality Scores, SLA Monitoring, and Metadata Cataloging. Adoption of AI Copilots & Agentic Development Utilize GitHub Copilot, Databricks Assistant, and other AI code agents for Writing PySpark, SQL, and Python code snippets for data engineering and ML tasks. Generating documentation and test cases to accelerate pipeline development. Interactive debugging and iterative code optimization within notebooks. Advocate for agentic AI workflows that use specialized agents for Data profiling and schema inference. Automated testing and validation. Innovation and Continuous Learning Stay abreast of emerging trends in Lakehouse architectures, Generative AI, and cloud-native tooling. Evaluate and pilot new features from Databricks releases and partner integrations for modern data stack improvements. 
    Requirements:
    • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
    • 8-12 years of hands-on experience in data engineering, with at least 5+ years on Python and Apache Spark.
    • Expertise in building high-throughput, low-latency ETL/ELT pipelines on AWS/Azure/GCP using Python, PySpark, and SQL.
    • Excellent hands-on experience with workload automation tools such as Airflow, Prefect, etc.
    • Familiarity with building dynamic ingestion frameworks from structured/unstructured data sources including APIs, flat files, RDBMS, and cloud storage.
    • Experience designing Lakehouse architectures with bronze, silver, and gold layering.
    • Strong understanding of data modelling concepts, star/snowflake schemas, dimensional modelling, and modern cloud-based data warehousing.
    • Experience designing data marts using cloud data warehouses and integrating with BI tools (Power BI, Tableau, etc.).
    • Experience with CI/CD pipelines using tools such as AWS CodeCommit, Azure DevOps, and GitHub Actions.
    • Knowledge of infrastructure-as-code (Terraform, ARM templates) for provisioning platform resources.
    • In-depth experience with AWS Cloud services such as Glue, S3, and Redshift.
    • Strong understanding of data privacy, access controls, and governance best practices.
    • Experience working with RBAC, tokenization, and data classification frameworks.
    • Excellent communication skills for stakeholder interaction, solution presentations, and team coordination.
    • Proven experience leading or mentoring global, cross-functional teams across multiple time zones and engagements.
    • Ability to work independently in agile or hybrid delivery models, while guiding junior engineers and ensuring solution quality.
    • Must be able to work in PST time zone.
    Pay: The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $150k - $180k. In addition, you may be eligible for a discretionary bonus for the current performance period.
    Benefits: As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a "free time" PTO policy, allowing you the flexibility to take the time needed for either sick time or vacation.
Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
    $150k-180k yearly 3d ago
  • Sr. Desktop C/C++ Software Engineers (Med Devices, Biomed, or Healthcare)

    Entegee 4.3company rating

    San Diego, CA jobs

    ONSITE Sr. Windows Desktop Software Engineer (C/C++11), San Diego, CA
    Industry: Med Devices, Biotech, Biomed, Healthcare, or Life Sciences
    ***MUST have U.S. citizenship, a GC, EAD, or TN-1 visa***
    Key to Role:
    * This is a Windows Desktop role, NOT an Embedded SWE role. If the hiring manager sees a heavily embedded background, the candidate will be disqualified.
    * Resumes MUST be reviewed thoroughly to ensure correct and accurate information for each applicable role before submission.
    Role:
    • Architect, design, and develop driver and diagnostic software for intravascular ultrasound systems and associated test systems.
    • Develop Windows driver and diagnostic software for the DigiPIM and CAT fixture.
    • Interface with multi-disciplinary teams consisting of marketing, hardware, software, catheter design, and manufacturing to refine design requirements for next-generation intravascular ultrasound devices.
    • Create software requirement specifications, software architecture documents, and detailed software design documents.
    • Design, develop, and debug driver and diagnostic software to implement communication between hardware and application software using C and/or C++.
    Minimum Required Education & Experience:
    * Bachelor's or Master's degree in Computer Science, Software Engineering, Information Technology, or equivalent.
    * Minimum 8 years of experience with a Bachelor's degree (or 4 years of experience with a Master's degree) in areas such as software development, software design and architecture using C/C++11, Windows driver and diagnostic software development, and testing and quality assurance, or equivalent.
    Preferred Skills: Software Test Automation, Agile Methodology, TDD, Scrum, SDLC, DevOps, Business Acumen, Continuous Improvement, Version Control Systems, Quality Specifications, Software Design, Code Reviews, Programming Languages, Debugging, API Design, API Integration.
    This role supports the Trinity Project and is required to maintain planned milestones.
    $105k-141k yearly est. 1d ago
  • Software Engineer

    Premier Group 4.5company rating

    San Francisco, CA jobs

    Founding Engineer | $140K - $200K + equity | San Francisco (Onsite Role) | Direct Hire
    A fast-growing, early-stage start-up that recently secured a significant Seed round is actively hiring 3x software engineers to join their founding team. They're looking for people who are scrappy, move fast, challenge assumptions, and are driven to win. They build quickly and expect teammates to push boundaries.
    Who You Are
    • Make quick, reversible ("two-way door") decisions
    • Proactively fix problems before being asked
    • Comfortable working across a modern engineering stack (e.g., TypeScript, Python, containerisation, ML/LLM tooling, databases, cloud environments, mobile frameworks)
    • Have built real, shipped products
    • Thrive in ambiguity and fast-moving environments
    What You'll Do
    • Talk directly with users to understand their workflows, pain points, and needs
    • Architect systems that support large enterprise usage
    • Build automated pipelines and intelligent agents that process and verify large volumes of data
    • Maintain scalable, robust infrastructure
    • Ship quickly: progress over perfection
    The Reality
    • You'll work closely with the founding team and directly with customers
    • User value beats hype, trends, and "cool tech"
    • Expect a demanding, high-output culture
    If you're a Software Engineer with 2+ years' experience and want to work in a growing start-up, please apply now for immediate consideration.
    $140k-200k yearly 1d ago
  • Software Engineer

    Kellymitchell Group 4.5company rating

    Glendale, CA jobs

    Our client is seeking a Software Engineer to join their team! This position is located in Glendale, California.
    • Collaborate on the design, development, and deployment of scalable, high-quality software solutions, leveraging best practices in software engineering, including coding standards, architecture design, and system reliability
    • Demonstrate strong proficiency in AWS platform tools and technologies, and leverage these tools effectively to build and maintain high-quality applications
    • Work closely with a team of software engineers and product owners, as well as other engineering teams, security, and infrastructure, to deliver software solutions on time
    • Prioritize and estimate work within an agile scrum process
    • Stay up to date with industry trends, emerging technologies, and best practices
    Desired Skills/Experience:
    • 3+ years of industry experience with a strong focus on application and shared-services development
    • Extensive experience with AWS platform tools and technologies, including Serverless Computing and API Gateway (see the sketch below)
    • Strong proficiency in TypeScript, Java, Kotlin, or JavaScript
    • Strong understanding of software engineering principles and best practices, including REST API development
    • Team player with excellent problem-solving and communication skills
    Benefits:
    • Medical, Dental, & Vision Insurance Plans
    • Employee-Owned Profit Sharing (ESOP)
    • 401K offered
    The approximate pay range for this position is between $57.04 and $81.48 per hour. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
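    As an illustration of the serverless REST pattern named above (API Gateway routing requests to a Lambda-style handler), here is a minimal sketch, not part of the posting. It is written in Python for consistency with the other example on this page, although the role lists TypeScript, Java, Kotlin, or JavaScript; the routes and payload fields are hypothetical.

    ```python
    # Minimal AWS Lambda handler behind an API Gateway proxy integration.
    # Routes and payload fields are hypothetical; the posting's preferred
    # languages are TypeScript/Java/Kotlin/JavaScript, Python is used here
    # only to keep this page's sketches in one language.
    import json


    def lambda_handler(event, context):
        """Handle simple GET (health check) and POST (echo) requests."""
        method = event.get("httpMethod", "GET")

        if method == "GET":
            body = {"status": "ok"}
        elif method == "POST":
            payload = json.loads(event.get("body") or "{}")
            # A real service would validate and persist the payload here.
            body = {"received": payload}
        else:
            return {
                "statusCode": 405,
                "body": json.dumps({"error": "method not allowed"}),
            }

        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(body),
        }
    ```

    With an API Gateway proxy integration, the event carries the HTTP method and raw body, and the handler returns the status code, headers, and JSON body that API Gateway passes back to the client.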
    $57-81.5 hourly 5d ago

Learn more about Lilt jobs