SOC Engineer
Foster City, CA jobs
Source One is a consulting services company and we're currently looking for the following individuals to work for an on-demand, autonomous ride-hailing company in Foster City, CA.
** We are unable to work with third party companies or offer visa sponsorship for this role.
Title: SOC Engineer (contract)
Pay Rate: $94.25/hr (W-2)
Hybrid: 3 days/week on-site
Description: We are seeking SOC Engineers to help enhance the company's security posture by driving automation and conducting proactive threat hunting. The ideal candidates have a strong InfoSec background with deep experience in SIEM and SOAR platforms, including rule and playbook development, along with proficiency in Python scripting for automation.
There are two positions: one role focused more on the SIEM side (they use Elastic, but Splunk experience is acceptable), and the other focused more on automation for detection.
Key Responsibilities:
- SIEM and SOAR Platform Management: Maintain our SIEM and SOAR platforms to ensure optimal performance and effectiveness in detecting and responding to security threats. Develop and fine-tune detection and correlation rules, dashboards, and reports within the SIEM to accurately detect anomalous activities. Create, manage, and optimize SOAR playbooks to automate incident response processes and streamline security operations.
- Automation and Scripting: Utilize Python scripting to develop custom integrations and automate repetitive tasks within the SOC. Build and maintain automation workflows to enhance the efficiency of threat detection, alert triage, and incident response. Integrate various security tools and threat intelligence feeds with our SIEM and SOAR platforms using APIs and custom scripts.
- Incident Response and Threat Hunting: Conduct proactive threat hunting to identify potential security gaps and indicators of compromise. Analyze security alerts and data from various sources to identify and respond to potential security incidents.
- Collaboration and Documentation: Collaborate with Information Security team members and other teams to enhance the overall security of the organization. Create and maintain clear and comprehensive documentation for detection rules, automation workflows, and incident response procedures.
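For a feel of the automation work described above, here is a minimal Python sketch of a playbook-style enrichment step. The SIEM and threat-intel endpoints, field names, and token handling are illustrative assumptions, not the client's actual APIs:

    import requests

    SIEM_URL = "https://siem.example.com/api/alerts"   # hypothetical SIEM REST endpoint
    TI_URL = "https://intel.example.com/api/v1/ip"     # hypothetical threat-intel feed
    API_KEY = "redacted"                               # in practice, pulled from a secrets manager

    def enrich_open_alerts():
        """Pull open alerts from the SIEM and tag each source IP with threat-intel context."""
        headers = {"Authorization": f"Bearer {API_KEY}"}
        alerts = requests.get(SIEM_URL, headers=headers, params={"status": "open"}, timeout=30).json()
        for alert in alerts:
            ip = alert.get("source_ip")
            if not ip:
                continue
            intel = requests.get(f"{TI_URL}/{ip}", headers=headers, timeout=30).json()
            alert["reputation"] = intel.get("reputation", "unknown")
            # Post the enrichment back so a SOAR playbook can branch on reputation.
            requests.post(f"{SIEM_URL}/{alert['id']}/enrichment", headers=headers, json=alert, timeout=30)

    if __name__ == "__main__":
        enrich_open_alerts()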
Top Skills:
- SIEM: InfoSec background; incident response/threat hunting; rule creation (some query language experience needed)
- SOAR/Automation: Python automation; big data and systems experience. Cortex XSOAR is already well established - the work involves maintaining existing playbooks, logic changes, and bug fixes
Required:
- 6+ years of experience in a Security Operations Center (SOC) environment or a similar cybersecurity role
- Hands-on experience with managing and configuring SIEM platforms (e.g., Elastic SIEM, Splunk, QRadar, Microsoft Sentinel)
- Demonstrable experience with SOAR platforms (e.g., Palo Alto Cortex XSOAR, Splunk SOAR) and playbook development
- Proficiency in Python for scripting and automation of security tasks
- Strong understanding of incident response methodologies, threat intelligence, and cybersecurity frameworks (e.g., MITRE ATT&CK, NIST)
- Excellent analytical and problem-solving skills with the ability to work effectively in a fast-paced environment
Preferred:
- Relevant industry certifications such as CISSP, GCIH, or similar
- Experience with cloud security and environmental constructs (AWS, Azure, GCP)
- Familiarity with other scripting languages (e.g., PowerShell, Bash)
- Knowledge of network and endpoint security solutions
Powertrain Engineer
Georgetown, KY jobs
Job Title: Powertrain Engineer
Job Type: Contract
Founded in 1996, CTC is a global IT services, Consulting and Business Solutions partner dedicated to helping organizations innovate, optimize, and grow. With over 2,000 professionals worldwide, we support more than 100 clients in transforming complex challenges into lasting competitive advantages.
Job Description:
Developing process plans for production lines within the powertrain department with direct support from the Manager.
Directly supporting other engineers or technicians through mentorship.
Leading the cost planning, budget creation, machine specification, procurement, validation, installation, pre-production trials, and launch of production lines.
Creating and maintaining detailed schedules and budgets for the assigned area.
Participating in and leading cross-functional working groups across the local facility and corporate departments to achieve project goals.
Supporting construction and equipment installations.
Requirements:
Bachelor's degree required; Mechanical Engineering preferred
5+ years of experience required
Toyota powertrain experience preferred
Mechanical background preferred
Experience with equipment specifications and implementation
Design drawings
Run and install equipment (engine manufacturing)
Education:
Bachelor's degree required; Mechanical Engineering preferred.
Neo4j Engineer
Summit, NJ jobs
Must Have Technical/Functional Skills
Neo4j, Graph Data Science, Cypher, Python, Graph Algorithms, Bloom, GraphXR, Cloud, Kubernetes, ETL
Roles & Responsibilities
Design and implement graph-based data models using Neo4j.
Develop Cypher queries and procedures for efficient graph traversal and analysis.
Apply Graph Data Science algorithms for community detection, centrality, and similarity.
Integrate Neo4j with enterprise data platforms and APIs.
Collaborate with data scientists and engineers to build graph-powered applications.
Optimize performance and scalability of graph queries and pipelines.
Support deployment and monitoring of Neo4j clusters in cloud or on-prem environments.
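For context on the day-to-day work, here is a minimal sketch using the Neo4j Python driver: a plain Cypher traversal plus a Graph Data Science call. The connection details, node labels, and the 'accounts' in-memory projection are illustrative assumptions:

    from neo4j import GraphDatabase  # pip install neo4j

    URI = "bolt://localhost:7687"        # placeholder connection details
    AUTH = ("neo4j", "password")

    driver = GraphDatabase.driver(URI, auth=AUTH)
    with driver.session() as session:
        # Plain Cypher traversal: accounts that share a device (a common fraud-style pattern).
        shared = session.run(
            "MATCH (a:Account)-[:USES]->(d:Device)<-[:USES]-(b:Account) "
            "WHERE a <> b RETURN a.id AS left, b.id AS right LIMIT 25"
        )
        for record in shared:
            print(record["left"], "shares a device with", record["right"])

        # Graph Data Science: stream PageRank over a previously projected graph
        # (assumes a projection named 'accounts' created earlier via gds.graph.project).
        ranks = session.run(
            "CALL gds.pageRank.stream('accounts') "
            "YIELD nodeId, score "
            "RETURN gds.util.asNode(nodeId).id AS account, score "
            "ORDER BY score DESC LIMIT 10"
        )
        for record in ranks:
            print(record["account"], record["score"])
    driver.close()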
Salary Range: $110,000 - $140,000 per year
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Databricks Engineer
Seattle, WA jobs
5+ years of experience in data engineering or similar roles.
Strong expertise in Databricks, Apache Spark, and PySpark.
Proficiency in SQL, Python, and data modeling concepts.
Experience with cloud platforms (Azure preferred; AWS/GCP is a plus).
Knowledge of Delta Lake, Lakehouse architecture, and partitioning strategies.
Familiarity with data governance, security best practices, and performance tuning.
Hands-on experience with version control (Git) and CI/CD pipelines.
Roles & Responsibilities:
Design and develop ETL/ELT pipelines using Azure Databricks and Apache Spark.
Integrate data from multiple sources into the data lake and data warehouse environments.
Optimize data workflows for performance and cost efficiency in cloud environments (Azure/AWS/GCP).
Implement data quality checks, monitoring, and alerting for pipelines.
Collaborate with data scientists and analysts to provide clean, curated datasets.
Ensure compliance with data governance, security, and privacy standards.
Automate workflows using CI/CD pipelines and orchestration tools (e.g., Airflow, Azure Data Factory).
Troubleshoot and resolve issues in data pipelines and platform components.
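To give a flavor of the pipeline work above, here is a minimal PySpark/Delta sketch that lands raw files in a bronze table and publishes a curated silver table. The storage paths, schema, and partitioning are illustrative assumptions, not a prescribed design:

    from pyspark.sql import SparkSession, functions as F

    # On Databricks a SparkSession is already provided as `spark`; getOrCreate() covers local runs.
    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Ingest raw JSON into a bronze Delta table (illustrative paths).
    raw = spark.read.format("json").load("abfss://landing@storageacct.dfs.core.windows.net/orders/")
    raw.write.format("delta").mode("append").save("/mnt/lake/bronze/orders")

    # Clean and curate into a silver table, with a simple data-quality filter.
    silver = (spark.read.format("delta").load("/mnt/lake/bronze/orders")
              .filter(F.col("order_id").isNotNull())
              .withColumn("order_date", F.to_date("order_ts")))
    (silver.write.format("delta")
           .mode("overwrite")
           .partitionBy("order_date")
           .save("/mnt/lake/silver/orders"))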
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Salary Range - $100,000-$140,000 a year
GCP Engineer with BigQuery, PySpark
Phoenix, AZ jobs
Job Title: GCP Engineer with BigQuery, PySpark
Experience Required - 7+ Years
Must Have Technical/Functional Skills
GCP Engineer with BigQuery, PySpark, and Python experience
Roles & Responsibilities
· 6+ years of professional experience, with at least 4+ years as a GCP Data Engineer
· Experience working on GCP application migration for large enterprises
· Hands-on experience with Google Cloud Platform (GCP)
· Extensive experience with ETL/ELT tools and data transformation frameworks
· Working knowledge of data storage solutions like BigQuery or Cloud SQL
· Solid skills in data orchestration tools like Airflow or Cloud Workflows
· Familiarity with Agile development methods
· Hands-on experience with Spark, Python, and PySpark APIs
· Knowledge of various shell scripting tools
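As an illustration of this stack, here is a minimal PySpark sketch that reads from BigQuery through the spark-bigquery connector and writes a daily aggregate back. The project, dataset, table, and staging bucket names are placeholders:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("bq-aggregation").getOrCreate()

    # Read a BigQuery table through the spark-bigquery connector
    # (bundled on Dataproc; add the connector jar in other environments).
    events = (spark.read.format("bigquery")
              .option("table", "my-project.analytics.events")      # placeholder table
              .load())

    daily = (events.groupBy(F.to_date("event_ts").alias("event_date"))
                   .agg(F.count("*").alias("event_count")))

    # Write results back to BigQuery; a GCS bucket serves as temporary staging.
    (daily.write.format("bigquery")
          .option("table", "my-project.analytics.daily_event_counts")
          .option("temporaryGcsBucket", "my-staging-bucket")        # placeholder bucket
          .mode("overwrite")
          .save())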
Salary Range - $90,000 to $120,000 per year
Interested candidates, please share your updated resume at *******************
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Kubernetes Engineer
Plano, TX jobs
Hands-on experience with Kubernetes engineering and development.
Minimum 5-7+ years of experience working with hybrid infrastructure architectures.
Experience analyzing the architecture of on-prem infrastructure for applications (network, storage, processing, backup/DR, etc.).
Strong understanding of infrastructure capacity planning, monitoring, upgrades, IaC automation using Terraform and Ansible, and CI/CD using Jenkins/GitHub Actions.
Experience working with engineering teams to define best practices and processes that support the entire infrastructure lifecycle (plan, build, deploy, and operate), including automating lifecycle activities such as self-service, orchestration and provisioning, and configuration management.
Experience defining infrastructure direction.
Drive continuous improvement, including design and standardization of processes and methodologies.
Experience assessing feasibility, complexity and scope of new capabilities and solutions
Base Salary Range: $100,000 - $110,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Data Engineer
Atlanta, GA jobs
No C2C
We're looking for a hands-on Data Engineer to help build, scale, and fine-tune real-time data systems using Kafka, AWS, and a modern data stack. In this role, you'll work deeply with streaming data, ETL, distributed systems, and PostgreSQL to power analytics, product innovation, and AI-driven use cases. You'll also get to work with AI/ML frameworks, automation, and MLOps tools to support advanced modeling and a highly responsive data platform.
What You'll Do
Design and build real-time streaming pipelines using Kafka, Confluent Schema Registry, and Zookeeper
Build and manage cloud-based data workflows using AWS services like Glue, EMR, EC2, and S3
Optimize and maintain PostgreSQL and other databases with strong schema design, advanced SQL, and performance tuning
Integrate AI and ML frameworks (TensorFlow, PyTorch, Hugging Face) into data pipelines for training and inference
Automate data quality checks, feature generation, and anomaly detection using AI-powered monitoring and observability tools
Partner with ML engineers to deploy, monitor, and continuously improve machine learning models in both batch and real-time pipelines using tools like MLflow, SageMaker, Airflow, and Kubeflow
Experiment with vector databases and retrieval-augmented generation (RAG) pipelines to support GenAI and LLM initiatives
Build scalable, cloud-native, event-driven architectures that power AI-driven data products
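For a sense of the streaming side of the role, here is a minimal consumer sketch using the confluent-kafka Python client. The broker address, topic, and consumer group are hypothetical:

    import json
    from confluent_kafka import Consumer  # pip install confluent-kafka

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",   # placeholder brokers
        "group.id": "orders-etl",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["orders"])               # placeholder topic

    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None:
                continue
            if msg.error():
                print("Consumer error:", msg.error())
                continue
            event = json.loads(msg.value())
            # Downstream, this is where records would be validated, enriched,
            # and loaded into PostgreSQL or S3 for analytics and ML features.
            print(event.get("order_id"), event.get("amount"))
    finally:
        consumer.close()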
What You Bring
Bachelor's degree in Computer Science, Engineering, Math, or a related technical field
3+ years of hands-on data engineering experience with Kafka (Confluent or open-source) and AWS
Experience with automated data quality, monitoring, and observability tools
Strong SQL skills and solid database fundamentals with PostgreSQL and both traditional and NoSQL databases
Proficiency in Python, Scala, or Java for pipeline development and AI integrations
Experience with synthetic data generation, vector databases, or GenAI-powered data products
Hands-on experience integrating ML models into production data pipelines using frameworks like PyTorch or TensorFlow and MLOps tools such as Airflow, MLflow, SageMaker, or Kubeflow
Data Engineer
Houston, TX jobs
We are looking for talented and motivated Python Data Engineers to help expand our data assets in support of our analytical capabilities. This is a full-time role with the opportunity to interface directly with our traders, analysts, researchers, and data scientists to drive out requirements and deliver on a wide range of data-related needs.
What you will do:
- Translate business requirements into technical deliveries. Drive out requirements for data ingestion and access
- Maintain the cleanliness of our Python codebase, while adhering to existing designs and coding conventions as much as possible
- Contribute to our developer tools and Python ETL toolkit, including standardization and consolidation of core functionality
- Efficiently coordinate with the rest of our team in different locations
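As a small example of the kind of ingestion utility such a toolkit contains, here is a sketch that scrapes a page with Requests and Beautiful Soup into a Pandas DataFrame. The URL, table id, and columns are hypothetical:

    import pandas as pd
    import requests
    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    def scrape_prices(url: str) -> pd.DataFrame:
        """Fetch a page, parse a simple two-column price table, return a typed DataFrame."""
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        rows = []
        for tr in soup.select("table#prices tr")[1:]:   # hypothetical table id; skip header row
            cells = [td.get_text(strip=True) for td in tr.find_all("td")]
            if len(cells) >= 2:
                rows.append({"symbol": cells[0], "price": float(cells[1])})
        return pd.DataFrame(rows)

    if __name__ == "__main__":
        df = scrape_prices("https://example.com/market/prices")
        print(df.describe())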
Qualifications
- 6+ years of enterprise-level coding experience with Python
- Computer Science, MIS or related degree
- Familiarity with Pandas and NumPy packages
- Experience with Data Engineering and building data pipelines
- Experience scraping websites with Requests, Beautiful Soup, Selenium, etc.
- Strong understanding of object-oriented design, design patterns, and SOA architectures
- Proficient understanding of peer-reviewing, code versioning, and bug/issue tracking tools.
- Strong communication skills
- Familiarity with containerization solutions like Docker and Kubernetes is a plus
Data Engineer
Irvine, CA jobs
Thank you for stopping by to take a look at the Data Engineer role I posted here on LinkedIn; I appreciate it.
If you have read my postings in the past, you will recognize how I write job descriptions. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc, and I have been recruiting technical talent for more than 23 years and have been in the tech space since the 1990s. Because of this, I actually write JDs myself...no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia...especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is.
So, as with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very very important. But, we also have some social fit characteristics that are important. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine. He said something interesting to me not all that long ago. He mentioned, if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job'. So that ability to jump in and help is needed for success in this role.
This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available.
I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder for this role.
The Data Engineer role is similar to the Data Integration role I posted. However, this one is more Ops focused, covering the orchestration of deployment and MLflow, orchestrating and using data on the clusters, and managing how the models are performing. This role focuses on coding and configuring on the ML side of the house.
You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, allowing data scientists to focus on experimentation while the business sees rapid, reliable predictive insight.
Here are some of the main responsibilities:
Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets.
Build high-throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault-style modeling patterns for auditability.
Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component.
Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error budget metrics to enable proactive alerting.
Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process.
Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks.
Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals.
Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor.
Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines.
Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards.
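To make the MLflow piece concrete, here is a minimal tracking and registry sketch. The tracking URI, experiment name, and registered model name are placeholders, and the toy model stands in for whatever the data scientists actually train:

    import mlflow
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    mlflow.set_tracking_uri("http://mlflow.internal:5000")   # placeholder tracking server
    mlflow.set_experiment("churn-model")

    X, y = make_classification(n_samples=1000, n_features=20, random_state=7)

    with mlflow.start_run():
        params = {"n_estimators": 200, "max_depth": 6}
        model = RandomForestClassifier(**params, random_state=7).fit(X, y)

        mlflow.log_params(params)
        mlflow.log_metric("train_accuracy", model.score(X, y))

        # Log and register the model so deployments can pin or roll back to a known version.
        mlflow.sklearn.log_model(model, artifact_path="model",
                                 registered_model_name="churn-model")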
Here is what we are seeking:
At least 6 years of experience building data pipelines in Spark or equivalent.
At least 2 years deploying workloads on Kubernetes/Kubeflow.
At least 2 years of experience with MLflow or similar experiment‑tracking tools.
At least 6 years of experience in T‑SQL, Python/Scala for Spark.
At least 6 years of PowerShell/.NET scripting.
At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS.
Kubernetes CKA/CKAD, Azure Data Engineer (DP‑203), or MLOps‑focused certifications (e.g., Kubeflow or MLflow) would be great to see.
Ability to mentor engineers on best practices in containerized data engineering and MLOps.
Data Conversion Engineer
Charlotte, NC jobs
Summary/Objective
Are you looking to work at a high growth, innovative, and purpose driven FinTech company? If so, you'll love Paymentus. Recognized by Deloitte as one of the fastest growing companies in North America, Paymentus is the premier provider of innovative, reliable, and secure electronic bill payment and presentment for more than 1700 clients. We are a SaaS provider that enables companies to help their customers simplify their financial lives. We do that by making it easier for consumers and businesses to pay bills, plus move and manage money to achieve strong financial health. We continually build upon a massively scalable platform, supporting thousands of businesses and millions of transactions on a daily basis. We're looking for high performers to join our team who excel in their expertise and who can transform plans into action. You'll have the opportunity to grow in an environment where intelligence, innovation, and leadership are valued and rewarded.
About the Role
The Data Conversion Engineer serves as a key component of the Platform Integrations team, providing technical support and guidance on data conversion projects. Conversions are an integral part in ensuring adherence to Paymentus' standards for a successful launch. This role is essential to ensure all bill payment data converts properly and efficiently onto the Paymentus platform.
Responsibilities
Develop data conversion procedures using SQL, Java and Linux scripting
Augment and automate existing manual procedures to optimize accuracy and reduce time for each conversion
Develop and update conversion mappers to interpret incoming data and manipulate it to match Paymentus' specifications
Develop new specifications to satisfy new customers and products
Serve as the primary point of contact/driver for all technical related conversion activities
Review conversion calendar and offer technical support and solutions to meet deadlines and contract dates
Maintain and update technical conversion documentation to share with internal and external clients and partners
Work in close collaboration with implementation, integration, product and development teams using exceptional communication skills
Adapt and creatively solve encountered problems under high stress and tight deadlines
Learn the database structure and business logic, and combine that knowledge to improve processes
Be flexible
Monitor new client conversions and existing client support if needed; provide daily problem solving, coordination, and communication
Management of multiple projects and conversion implementations
Ability to proactively troubleshoot and solve problems with limited supervision
Qualifications
B.S. Degree in Computer Science or comparable experience
Strong knowledge of Linux and the command line interface
Exceptional SQL skills
Experience with logging/monitoring tools (AWS Cloudwatch, Splunk, ELK, etc.)
Familiarity with various online banking applications and understanding of third-party integrations is a plus
Effective written and verbal communication skills
Problem Solver - recognizes the need to resolve issues quickly and effectively, uses logic to solve problems; identifies problems and brings forward multiple solution options; knows who/when to involve appropriate people when troubleshooting issues
Communication - ability to use formal and informal written and/or verbal communication channels to inform others; articulates ideas and thoughts clearly both verbally and in writing
Dynamic and self-motivated; able to work on their own initiative and deliver the objectives required to maintain service levels
Strong attention to detail
Proficiency with raw data, analytics, or data reporting tools
Preferred Skills
Background in the Payments, Banking, E-Commerce, Finance and/or Utility industries
Experience with front end web interfaces (HTML5, Javascript, CSS3)
Cloud technologies (AWS, GCP, Azure)
Work Environment
This job operates in a professional office environment. This role routinely uses standard office equipment such as laptop computers, photocopiers and smartphones.
Physical Demands
This role requires sitting or standing at a computer workstation for extended periods of time.
Position Type/Expected Hours of Work
This is a full-time position. Days and hours of work are Monday through Friday, 40 hours a week. Occasional evening and weekend work may be required as job duties demand.
Travel
No travel is required for this position.
Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Equal Opportunity Statement
Paymentus is an equal opportunity employer. We enthusiastically accept our responsibility to make employment decisions without regard to race, religious creed, color, age, sex, sexual orientation, national origin, ancestry, citizenship status, religion, marital status, disability, military service or veteran status, genetic information, medical condition including medical characteristics, or any other classification protected by applicable federal, state, and local laws and ordinances. Our management is dedicated to ensuring the fulfillment of this policy with respect to hiring, placement, promotion, transfer, demotion, layoff, termination, recruitment advertising, pay, and other forms of compensation, training, and general treatment during employment.
Reasonable Accommodation
Paymentus recognizes and supports its obligation to endeavor to accommodate job applicants and employees with known physical or mental disabilities who are able to perform the essential functions of the position, with or without reasonable accommodation. Paymentus will endeavor to provide reasonable accommodations to otherwise qualified job applicants and employees with known physical or mental disabilities, unless doing so would impose an undue hardship on the Company or pose a direct threat of substantial harm to the employee or others. An applicant or employee who believes he or she needs a reasonable accommodation of a disability should discuss the need for possible accommodation with the Human Resources Department, or his or her direct supervisor.
Data Platform Engineer / AI Workloads
San Jose, CA jobs
We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads we would love to speak with you.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, machine learning infrastructure while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
5+ years of experience designing and building distributed database systems
Expertise in building and operating scalable, reliable and secure database infrastructure systems
Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB.
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Our Vibe:
Relaxed work environment
100% paid top of the line health care benefits
Full ownership, no micro management
Strong equity package
401K
Unlimited vacation
An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
Sr. Data Engineer
Dallas, TX jobs
Trinity Industries is searching for a Sr. Data Engineer to join our Data Analytics team in Dallas, TX! The successful candidate will work with the Trinity Rail teams to develop and maintain data pipelines in Azure utilizing Databricks, Python and SQL.
Join our team today and be a part of Delivering Goods for the Good of All!
What you'll do:
Facilitate technical design of complex data sourcing, transformation and aggregation logic, ensuring business analytics requirements are met
Work with leadership to prioritize business and information needs
Engage with product and app development teams to gather requirements, and create technical requirements
Utilize and implement data engineering best practices and coding strategies
Be responsible for data ingress into storage
What you'll need:
Bachelor's Degree Computer Science, Information Management, or related field required; Masters preferred
8+ years in data engineering including prior experience in data transformation
Databricks experience building data pipelines using the medallion architecture, bronze to gold
Advanced skills in Spark and structured streaming, SQL, Python
Technical expertise regarding data models, database design/development, data mining and other segmentation techniques
Experience with data conversion, interface and report development
Experience working with IOT and/or geospatial data in a cloud environment (Azure)
Adept at queries, report writing and presenting findings
Prior experience coding utilizing repositories and multiple coding environments
Must possess effective communication skills, both verbal and written
Strong organizational, time management and multi-tasking skills
Process improvement and automation a plus
Nice to have:
Databricks Data Engineering Associate or Professional Certification (2023 or newer)
Data Platform Engineer / AI Workloads
Santa Rosa, CA jobs
We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads we would love to speak with you.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, machine learning infrastructure while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
5+ years of experience designing and building distributed database systems
Expertise in building and operating scalable, reliable and secure database infrastructure systems
Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB.
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Our Vibe:
Relaxed work environment
100% paid top of the line health care benefits
Full ownership, no micro management
Strong equity package
401K
Unlimited vacation
An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
Data Platform Engineer / AI Workloads
San Francisco, CA jobs
We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads we would love to speak with you.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, machine learning infrastructure while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
5+ years of experience designing and building distributed database systems
Expertise in building and operating scalable, reliable and secure database infrastructure systems
Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB.
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Our Vibe:
Relaxed work environment
100% paid top of the line health care benefits
Full ownership, no micro management
Strong equity package
401K
Unlimited vacation
An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
Data Platform Engineer / AI Workloads
Fremont, CA jobs
We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads we would love to speak with you.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, machine learning infrastructure while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
5+ years of experience designing and building distributed database systems
Expertise in building and operating scalable, reliable and secure database infrastructure systems
Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB.
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Our Vibe:
Relaxed work environment
100% paid top of the line health care benefits
Full ownership, no micro management
Strong equity package
401K
Unlimited vacation
An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
AWS Data Engineer
Seattle, WA jobs
Must Have Technical/Functional Skills:
We are seeking an experienced AWS Data Engineer to join our data team and play a crucial role in designing, implementing, and maintaining scalable data infrastructure on Amazon Web Services (AWS). The ideal candidate has a strong background in data engineering, with a focus on cloud-based solutions, and is proficient in leveraging AWS services to build and optimize data pipelines, data lakes, and ETL processes. You will work closely with data scientists, analysts, and stakeholders to ensure data availability, reliability, and security for our data-driven applications.
Roles & Responsibilities:
Key Responsibilities:
• Design and Development: Design, develop, and implement data pipelines using AWS services such as AWS Glue, Lambda, S3, Kinesis, and Redshift to process large-scale data.
• ETL Processes: Build and maintain robust ETL processes for efficient data extraction, transformation, and loading, ensuring data quality and integrity across systems.
• Data Warehousing: Design and manage data warehousing solutions on AWS, particularly with Redshift, for optimized storage, querying, and analysis of structured and semi-structured data.
• Data Lake Management: Implement and manage scalable data lake solutions using AWS S3, Glue, and related services to support structured, unstructured, and streaming data.
• Data Security: Implement data security best practices on AWS, including access control, encryption, and compliance with data privacy regulations.
• Optimization and Monitoring: Optimize data workflows and storage solutions for cost and performance. Set up monitoring, logging, and alerting for data pipelines and infrastructure health.
• Collaboration: Work closely with data scientists, analysts, and business stakeholders to understand data needs and deliver data solutions aligned with business goals.
• Documentation: Create and maintain documentation for data infrastructure, data pipelines, and ETL processes to support internal knowledge sharing and compliance.
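As an illustration of the Glue-based pipelines described above, here is a minimal PySpark Glue job skeleton. The catalog database, table, partition column, and S3 bucket are placeholders:

    import sys
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read from a Glue Data Catalog table (placeholder names).
    orders = glue_context.create_dynamic_frame.from_catalog(
        database="raw_db", table_name="orders")

    # Basic transform: drop rows with null keys, then write partitioned Parquet to the lake.
    cleaned = orders.toDF().dropna(subset=["order_id"])
    (cleaned.write.mode("overwrite")
            .partitionBy("order_date")
            .parquet("s3://my-data-lake/curated/orders/"))   # placeholder bucket

    job.commit()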
Base Salary Range: $100,000 - $130,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Data Governance Engineer
Phoenix, AZ jobs
Role: Data Governance Engineer
Experience Required - 6+ Years
Must Have Technical/Functional Skills
• Understanding of Data Management and Data Governance concepts (metadata, lineage, data quality, etc.) and prior experience.
• 2 - 5 years of Data Quality Management experience.
• Intermediate competency in SQL & Python or related programming language.
• Strong familiarity with data architecture and/or data modeling concepts
• 2 - 5 years of experience with Agile or SAFe project methodologies
Roles & Responsibilities
• Assist in identifying data-related risks and associated controls for key business processes. Risks relate to Record Retention, Data Quality, Data Movement, Data Stewardship, Data Protection, Data Sharing, among others.
• Identify data quality issues, perform root-cause-analysis of data quality issues and drive remediation of audit and regulatory feedback.
• Develop deep understanding of key enterprise data-related policies and serve as the policy expert for the business unit, providing education to teams regarding policy implications for business.
• Responsible for holistic platform data quality monitoring, including but not limited to critical data elements.
• Collaborate with and influence product managers to ensure all new use cases are managed according to policies.
• Influence and contribute to strategic improvements to data assessment processes and analytical tools.
• Responsible for monitoring data quality issues, communicating issues, and driving resolution.
• Support current regulatory reporting needs via existing platforms, working with upstream data providers, downstream business partners, as well as technology teams.
• Subject matter expertise on multiple platforms.
• Responsible to partner with the Data Steward Manager in developing and managing the data compliance roadmap.
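For a concrete sense of the data quality monitoring described above, here is a small Python sketch of rule-based checks over critical data elements. The dataset, columns, and thresholds are hypothetical:

    import pandas as pd

    def run_quality_checks(df: pd.DataFrame) -> pd.DataFrame:
        """Evaluate simple completeness and validity rules and return a findings report."""
        checks = [
            ("customer_id not null", df["customer_id"].notna().mean(), 1.00),
            ("ssn format valid",
             df["ssn"].astype(str).str.fullmatch(r"\d{3}-\d{2}-\d{4}").mean(), 0.99),
            ("open_date not in future",
             (pd.to_datetime(df["open_date"]) <= pd.Timestamp.today()).mean(), 1.00),
        ]
        report = pd.DataFrame(checks, columns=["rule", "pass_rate", "threshold"])
        report["status"] = (report["pass_rate"] >= report["threshold"]).map({True: "PASS", False: "FAIL"})
        return report

    if __name__ == "__main__":
        accounts = pd.read_parquet("accounts.parquet")   # hypothetical critical-data extract
        print(run_quality_checks(accounts))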
Generic Managerial Skills, If any
• Drives Innovation & Change: Provides systematic and rational analysis to identify the root cause of problems. Is prepared to challenge the status quo and drive innovation. Makes informed judgments, recommends tailored solutions.
• Leverages Team - Collaboration: Coordinates efforts within and across teams to deliver goals, accountable to bring in ideas, information, suggestions, and expertise from others outside & inside the immediate team.
• Communication: Influences and holds others accountable and has ability to convince others. Identifies the specific data governance requirements and is able to communicate clearly and in a compelling way.
Interested candidates, please share your updated resume at *******************
Salary Range - $100,000 to $120,000 per year
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Distributed Systems Engineer / AI Workloads
Sunnyvale, CA jobs
We are actively searching for a Distributed Systems Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads we would love to speak with you. Our office is located in downtown SF and we collaborate two days a week onsite.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, machine learning infrastructure while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
3+ years of professional distributed database systems experience
Expertise in building and operating scalable, reliable and secure database infrastructure systems
Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB.
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Available to collaborate onsite 2 days a week
Our Vibe:
Relaxed work environment
100% paid top of the line health care benefits
Full ownership, no micro management
Strong equity package
401K
Unlimited vacation
An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
Distributed Systems Engineer / AI Workloads
San Mateo, CA jobs
We are actively searching for a Distributed Systems Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads we would love to speak with you. Our office is located in downtown SF and we collaborate two days a week onsite.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, machine learning infrastructure while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
3+ years of professional distributed database systems experience
Expertise in building and operating scalable, reliable and secure database infrastructure systems
Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB.
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Available to collaborate onsite 2 days a week
Our Vibe:
Relaxed work environment
100% paid top of the line health care benefits
Full ownership, no micro management
Strong equity package
401K
Unlimited vacation
An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
Distributed Systems Engineer / AI Workloads
Fremont, CA jobs
We are actively searching for a Distributed Systems Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads we would love to speak with you. Our office is located in downtown SF and we collaborate two days a week onsite.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, machine learning infrastructure while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
3+ years of professional distributed database systems experience
Expertise in building and operating scalable, reliable and secure database infrastructure systems
Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB.
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Available to collaborate onsite 2 days a week
Our Vibe:
Relaxed work environment
100% paid top of the line health care benefits
Full ownership, no micro management
Strong equity package
401K
Unlimited vacation
An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!