Founding Data Scientist (GTM)
San Francisco, CA
An early-stage investment of ours is looking to make their first IC hire in data science. This company builds tools that help teams understand how their AI systems perform and improve them over time (and they already have a lot of enterprise customers).
We're looking for a Sr Data Scientist to lead analytics for sales, marketing, and customer success. The job is about finding insights in data, running analyses and experiments, and helping the business make better decisions.
Responsibilities:
Analyze data to improve how the company finds, converts, and supports customers
Create models that predict lead quality, conversion, and customer value
Build clear dashboards and reports for leadership
Work with teams across the company to answer key questions
Take initiative, communicate clearly, and dig into data to solve problems
Try new methods and tools to keep improving the company's GTM approach
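A minimal sketch of the lead-quality modeling mentioned above, with invented feature names and hand-set weights; a real model would learn these from historical CRM conversion data rather than hard-coding them:

```python
import math

# Hypothetical feature names and hand-set weights, purely for illustration;
# a real model would fit these against historical conversion outcomes.
WEIGHTS = {"pages_viewed": 0.4, "demo_requested": 2.1, "company_size_log": 0.6}
BIAS = -3.0

def score_lead(features: dict) -> float:
    """Logistic combination of GTM signals into a 0-1 conversion propensity."""
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))
```

In practice the same shape would be produced by a fitted logistic regression; the point is only that lead scores come out as comparable probabilities.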
Qualifications:
5+ years of related industry experience working with data and supporting business teams
Solid experience analyzing GTM or revenue-related data
Strong skills in SQL and modern analytics tools (Snowflake, Hex, dbt, etc.)
Comfortable owning data workflows, from cleaning and modeling to presenting insights
Able to work independently, prioritize well, and move projects forward without much direction
Clear thinker and communicator who can turn data into actionable recommendations
Adaptable and willing to learn new methods in a fast-paced environment
About Us:
Greylock is an early-stage investor in hundreds of remarkable companies including Airbnb, LinkedIn, Dropbox, Workday, Cloudera, Facebook, Instagram, Roblox, Coinbase, Palo Alto Networks, among others. More can be found about us here: *********************
How We Work:
We are full-time, salaried employees of Greylock and provide free candidate referrals/introductions to our active investments. We will contact anyone who looks like a potential match, requesting to schedule a call with you immediately.
Due to the selective nature of this service and the volume of applicants we typically receive from our job postings, a follow-up email will not be sent until a match is identified with one of our investments.
Please note: We are not recruiting for any roles within Greylock at this time. This job posting is for direct employment with a startup in our portfolio.
Data Modeling
Melbourne, FL
Must Have Technical/Functional Skills
• 5+ years of experience in data modeling, data architecture, or a similar role
• Proficiency in SQL and experience with relational databases such as Oracle, SQL Server, or PostgreSQL
• Experience with data modeling tools such as Erwin, IBM Infosphere Data Architect, or similar
• Ability to communicate complex concepts clearly to diverse audiences
Roles & Responsibilities
• Design and develop conceptual, logical, and physical data models that support both operational and analytical needs
• Collaborate with business stakeholders to gather requirements and translate them into scalable data models
• Perform data profiling and analysis to understand data quality issues and identify opportunities for improvement
• Implement best practices for data modeling, including normalization, denormalization, and indexing strategies
• Lead data architecture discussions and present data modeling solutions to technical and non-technical audiences
• Mentor and guide junior data modelers and data architects within the team
• Continuously evaluate data modeling tools and techniques to enhance team efficiency and productivity
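For illustration, the normalization and indexing practices listed above can be shown with a toy two-table design; all table and column names here are invented, and SQLite stands in for Oracle/SQL Server/PostgreSQL:

```python
import sqlite3

# A flat orders feed split into two third-normal-form tables, with an
# index on the join key. Names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT NOT NULL UNIQUE)")
conn.execute("""CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    amount      REAL NOT NULL)""")
# An index on the foreign key speeds the common join path.
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
conn.execute("INSERT INTO customer VALUES (1, 'Acme')")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [(10, 1, 99.0), (11, 1, 25.0)])
total = conn.execute(
    "SELECT c.name, SUM(o.amount) FROM customer c JOIN orders o USING (customer_id) GROUP BY c.name"
).fetchone()
```

Denormalization would go the other way, folding customer attributes back into the fact table when read performance outweighs update anomalies.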
Base Salary Range: $100,000 - $150,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Data Engineer
Atlanta, GA
No C2C
We're looking for a hands-on Data Engineer to help build, scale, and fine-tune real-time data systems using Kafka, AWS, and a modern data stack. In this role, you'll work deeply with streaming data, ETL, distributed systems, and PostgreSQL to power analytics, product innovation, and AI-driven use cases. You'll also get to work with AI/ML frameworks, automation, and MLOps tools to support advanced modeling and a highly responsive data platform.
What You'll Do
Design and build real-time streaming pipelines using Kafka, Confluent Schema Registry, and Zookeeper
Build and manage cloud-based data workflows using AWS services like Glue, EMR, EC2, and S3
Optimize and maintain PostgreSQL and other databases with strong schema design, advanced SQL, and performance tuning
Integrate AI and ML frameworks (TensorFlow, PyTorch, Hugging Face) into data pipelines for training and inference
Automate data quality checks, feature generation, and anomaly detection using AI-powered monitoring and observability tools
Partner with ML engineers to deploy, monitor, and continuously improve machine learning models in both batch and real-time pipelines using tools like MLflow, SageMaker, Airflow, and Kubeflow
Experiment with vector databases and retrieval-augmented generation (RAG) pipelines to support GenAI and LLM initiatives
Build scalable, cloud-native, event-driven architectures that power AI-driven data products
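As a rough sketch, the validation step that would sit inside such a streaming pipeline's consumer loop might look like the following; the required fields and dead-letter routing policy are invented, and broker/topic wiring is omitted:

```python
import json

# Per-message transform for a Kafka consumer loop; REQUIRED is an invented
# minimal schema standing in for a Schema Registry contract.
REQUIRED = {"event_id", "user_id", "ts"}

def process(raw: bytes):
    """Validate one event payload; return the enriched record, or None so the
    caller can route the original message to a dead-letter topic."""
    try:
        event = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(event, dict) or not REQUIRED <= event.keys():
        return None
    event["validated"] = True
    return event
```

Keeping the transform a pure function of the message bytes makes it easy to unit-test apart from the Kafka client itself.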
What You Bring
Bachelor's degree in Computer Science, Engineering, Math, or a related technical field
3+ years of hands-on data engineering experience with Kafka (Confluent or open-source) and AWS
Experience with automated data quality, monitoring, and observability tools
Strong SQL skills and solid database fundamentals with PostgreSQL and both traditional and NoSQL databases
Proficiency in Python, Scala, or Java for pipeline development and AI integrations
Experience with synthetic data generation, vector databases, or GenAI-powered data products
Hands-on experience integrating ML models into production data pipelines using frameworks like PyTorch or TensorFlow and MLOps tools such as Airflow, MLflow, SageMaker, or Kubeflow
Data Engineer
Houston, TX
We are looking for a talented and motivated Python Data Engineer to help expand our data assets in support of our analytical capabilities in a full-time role. This role will interface directly with our traders, analysts, researchers, and data scientists to drive out requirements and deliver a wide range of data-related needs.
What you will do:
- Translate business requirements into technical deliveries. Drive out requirements for data ingestion and access
- Maintain the cleanliness of our Python codebase, while adhering to existing designs and coding conventions as much as possible
- Contribute to our developer tools and Python ETL toolkit, including standardization and consolidation of core functionality
- Efficiently coordinate with the rest of our team in different locations
Qualifications
- 6+ years of enterprise-level coding experience with Python
- Computer Science, MIS or related degree
- Familiarity with Pandas and NumPy packages
- Experience with Data Engineering and building data pipelines
- Experience scraping websites with Requests, Beautiful Soup, Selenium, etc.
- Strong understanding of object-oriented design, design patterns, and SOA architectures
- Proficient understanding of peer-reviewing, code versioning, and bug/issue tracking tools.
- Strong communication skills
- Familiarity with containerization solutions like Docker and Kubernetes is a plus
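As a small illustration of the scraping work mentioned above, here is a stdlib-only link extractor; the sample markup is invented, and a real pipeline would pair it with Requests (or Selenium for rendered pages) as the posting names:

```python
from html.parser import HTMLParser

# Stdlib stand-in for the Requests + Beautiful Soup stack: pull every
# anchor's href out of an HTML fragment.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

parser = LinkExtractor()
parser.feed('<ul><li><a href="/jobs/1">Data Engineer</a></li>'
            '<li><a href="/jobs/2">Analyst</a></li></ul>')
```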
Data Engineer
Irvine, CA
Thank you for stopping by to take a look at the Data Integration Engineer role I posted here on LinkedIn; I appreciate it.
If you have read my postings in the past, you will recognize how I write job descriptions. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc. I have been recruiting technical talent for more than 23 years and have been in the tech space since the 1990s. Due to this, I actually write JDs myself...no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia...especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is.
So, as with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very, very important. But we also have some social fit characteristics that are important. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine. He said something interesting to me not all that long ago: if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job'. So that ability to jump in and help is needed for success in this role.
This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available.
I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder for this role.
The Data Engineer role is similar to the Data Integration role I posted. However, this one is more Ops focused, with the orchestration of deployment and MLflow, including orchestrating and using data on the clusters and managing how the models are performing. This role focuses on coding & configuring on the ML side of the house.
You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, allowing data scientists to focus on experimentation while the business sees rapid, reliable predictive insight.
Here are some of the main responsibilities:
Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets.
Build high-throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault-style modeling patterns for auditability.
Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component.
Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error budget metrics to enable proactive alerting.
Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process.
Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks.
Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals.
Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor.
Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines.
Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards.
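A minimal sketch of the schema-drift checks described above, assuming a hand-written column contract; the column names and types here are invented, and a real pipeline would derive the contract from the feature store or registry:

```python
# Compare an incoming batch against a stored column contract and report
# which columns drifted. EXPECTED is an illustrative contract.
EXPECTED = {"order_id": int, "amount": float, "region": str}

def drift_columns(batch):
    """Return the set of columns that are missing or carry an unexpected type."""
    flagged = set()
    for row in batch:
        for column, expected_type in EXPECTED.items():
            if column not in row or not isinstance(row[column], expected_type):
                flagged.add(column)
    return flagged
```

A non-empty result is exactly the kind of anomaly that should surface as a Prometheus metric for proactive alerting.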
Here is what we are seeking:
At least 6 years of experience building data pipelines in Spark or equivalent.
At least 2 years deploying workloads on Kubernetes/Kubeflow.
At least 2 years of experience with MLflow or similar experiment‑tracking tools.
At least 6 years of experience in T‑SQL, Python/Scala for Spark.
At least 6 years of PowerShell/.NET scripting.
At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS.
Kubernetes CKA/CKAD, Azure Data Engineer (DP‑203), or MLOps‑focused certifications (e.g., Kubeflow or MLflow) would be great to see.
A willingness to mentor engineers on best practices in containerized data engineering and MLOps.
Data Platform Engineer / AI Workloads
San Jose, CA
We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads we would love to speak with you.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
5+ years of experience designing and building distributed database systems
Expertise in building and operating scalable, reliable and secure database infrastructure systems
Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB.
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line health care benefits
Full ownership, no micromanagement
Strong equity package
401K
Unlimited vacation
An actual work/life balance; we aren't trying to run you into the ground. We have families and enjoy life too!
Data Conversion Engineer
Charlotte, NC
Summary/Objective
Are you looking to work at a high growth, innovative, and purpose driven FinTech company? If so, you'll love Paymentus. Recognized by Deloitte as one of the fastest growing companies in North America, Paymentus is the premier provider of innovative, reliable, and secure electronic bill payment and presentment for more than 1700 clients. We are a SaaS provider that enables companies to help their customers simplify their financial lives. We do that by making it easier for consumers and businesses to pay bills, plus move and manage money to achieve strong financial health. We continually build upon a massively scalable platform, supporting thousands of businesses and millions of transactions on a daily basis. We're looking for high performers to join our team who excel in their expertise and who can transform plans into action. You'll have the opportunity to grow in an environment where intelligence, innovation, and leadership are valued and rewarded.
About the Role
The Data Conversion Engineer serves as a key component of the Platform Integrations team, providing technical support and guidance on data conversion projects. Conversions are an integral part in ensuring adherence to Paymentus' standards for a successful launch. This role is essential to ensure all bill payment data converts properly and efficiently onto the Paymentus platform.
Responsibilities
Develop data conversion procedures using SQL, Java and Linux scripting
Augment and automate existing manual procedures to optimize accuracy and reduce time for each conversion
Develop and update conversion mappers to interpret incoming data and manipulate it to match Paymentus' specifications
Develop new specifications to satisfy new customers and products
Serve as the primary point of contact/driver for all technical related conversion activities
Review conversion calendar and offer technical support and solutions to meet deadlines and contract dates
Maintain and update technical conversion documentation to share with internal and external clients and partners
Work in close collaboration with implementation, integration, product and development teams using exceptional communication skills
Adapt and creatively solve encountered problems under high stress and tight deadlines
Learn database structure, business logic and combine all knowledge to improve processes
Be flexible
Monitor new client conversions and existing client support if needed; provide daily problem solving, coordination, and communication
Manage multiple projects and conversion implementations
Proactively troubleshoot and solve problems with limited supervision
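For illustration, a conversion mapper of the kind described above can be sketched in a few lines of Python; the field mapping is invented, and real target fields would come from Paymentus' platform specifications:

```python
import csv
import io

# Hypothetical column mapping: incoming biller fields on the left, target
# platform fields on the right. Both sides are invented for this sketch.
MAPPING = {"acct_no": "account_id", "amt_due": "amount_due", "due_dt": "due_date"}

def convert(raw_csv: str):
    """Rename incoming columns per MAPPING and drop anything outside the spec."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    return [{MAPPING[k]: v for k, v in row.items() if k in MAPPING} for row in reader]
```

Centralizing the mapping in one table is what makes it practical to automate previously manual per-client conversion steps.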
Qualifications
B.S. Degree in Computer Science or comparable experience
Strong knowledge of Linux and the command line interface
Exceptional SQL skills
Experience with logging/monitoring tools (AWS Cloudwatch, Splunk, ELK, etc.)
Familiarity with various online banking applications and understanding of third-party integrations is a plus
Effective written and verbal communication skills
Problem Solver - recognizes the need to resolve issues quickly and effectively, uses logic to solve problems; identifies problems and brings forward multiple solution options; knows who/when to involve appropriate people when troubleshooting issues
Communication - uses formal and informal written and/or verbal communication channels to inform others; articulates ideas and thoughts clearly both verbally and in writing
Dynamic and self-motivated; able to work on their own initiative and deliver the objectives required to maintain service levels
Strong attention to detail
Proficiency with raw data, analytics, or data reporting tools
Preferred Skills
Background in the Payments, Banking, E-Commerce, Finance and/or Utility industries
Experience with front-end web interfaces (HTML5, JavaScript, CSS3)
Cloud technologies (AWS, GCP, Azure)
Work Environment
This job operates in a professional office environment. This role routinely uses standard office equipment such as laptop computers, photocopiers and smartphones.
Physical Demands
This role requires sitting or standing at a computer workstation for extended periods of time.
Position Type/Expected Hours of Work
This is a full-time position. Days and hours of work are Monday through Friday, 40 hours a week. Occasional evening and weekend work may be required as job duties demand.
Travel
No travel is required for this position.
Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Equal Opportunity Statement
Paymentus is an equal opportunity employer. We enthusiastically accept our responsibility to make employment decisions without regard to race, religious creed, color, age, sex, sexual orientation, national origin, ancestry, citizenship status, religion, marital status, disability, military service or veteran status, genetic information, medical condition including medical characteristics, or any other classification protected by applicable federal, state, and local laws and ordinances. Our management is dedicated to ensuring the fulfillment of this policy with respect to hiring, placement, promotion, transfer, demotion, layoff, termination, recruitment advertising, pay, and other forms of compensation, training, and general treatment during employment.
Reasonable Accommodation
Paymentus recognizes and supports its obligation to endeavor to accommodate job applicants and employees with known physical or mental disabilities who are able to perform the essential functions of the position, with or without reasonable accommodation. Paymentus will endeavor to provide reasonable accommodations to otherwise qualified job applicants and employees with known physical or mental disabilities, unless doing so would impose an undue hardship on the Company or pose a direct threat of substantial harm to the employee or others. An applicant or employee who believes he or she needs a reasonable accommodation of a disability should discuss the need for possible accommodation with the Human Resources Department, or his or her direct supervisor.
Sr. Data Engineer
Dallas, TX
Trinity Industries is searching for a Sr. Data Engineer to join our Data Analytics team in Dallas, TX! The successful candidate will work with the Trinity Rail teams to develop and maintain data pipelines in Azure utilizing Databricks, Python and SQL.
Join our team today and be a part of Delivering Goods for the Good of All!
What you'll do:
Facilitate technical design of complex data sourcing, transformation and aggregation logic, ensuring business analytics requirements are met
Work with leadership to prioritize business and information needs
Engage with product and app development teams to gather requirements, and create technical requirements
Utilize and implement data engineering best practices and coding strategies
Be responsible for data ingress into storage
What you'll need:
Bachelor's Degree in Computer Science, Information Management, or related field required; Master's preferred
8+ years in data engineering including prior experience in data transformation
Databricks experience building data pipelines using the medallion architecture, bronze to gold
Advanced skills in Spark and structured streaming, SQL, Python
Technical expertise regarding data models, database design/development, data mining and other segmentation techniques
Experience with data conversion, interface and report development
Experience working with IoT and/or geospatial data in a cloud environment (Azure)
Adept at queries, report writing and presenting findings
Prior experience coding utilizing repositories and multiple coding environments
Must possess effective communication skills, both verbal and written
Strong organizational, time management and multi-tasking skills
Process improvement and automation a plus
Nice to have:
Databricks Data Engineering Associate or Professional Certification (2023 or later)
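As a language-agnostic illustration of the medallion (bronze-to-gold) architecture named above, here is the idea in plain Python; the field names are invented, and the real pipeline would express these steps as Databricks/Spark jobs:

```python
# Medallion flow in miniature: bronze keeps raw records untouched, silver
# deduplicates and casts types, gold aggregates for analytics.
bronze = [
    {"car_id": "A1", "load_lbs": "52000"},
    {"car_id": "A1", "load_lbs": "52000"},  # duplicate, dropped at silver
    {"car_id": "B2", "load_lbs": "n/a"},    # unparseable, dropped at silver
]

silver, seen = [], set()
for record in bronze:
    try:
        load = int(record["load_lbs"])
    except ValueError:
        continue  # quarantine bad rows instead of failing the batch
    key = (record["car_id"], load)
    if key not in seen:
        seen.add(key)
        silver.append({"car_id": record["car_id"], "load_lbs": load})

gold = {"total_load_lbs": sum(r["load_lbs"] for r in silver)}
```

The value of the layering is that bronze preserves an auditable raw copy while downstream consumers only ever see the cleansed silver and gold tables.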
Data Platform Engineer / AI Workloads
Santa Rosa, CA
We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads we would love to speak with you.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
5+ years of experience designing and building distributed database systems
Expertise in building and operating scalable, reliable and secure database infrastructure systems
Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB.
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line health care benefits
Full ownership, no micromanagement
Strong equity package
401K
Unlimited vacation
An actual work/life balance; we aren't trying to run you into the ground. We have families and enjoy life too!
Data Platform Engineer / AI Workloads
San Francisco, CA
We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads we would love to speak with you.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
5+ years of experience designing and building distributed database systems
Expertise in building and operating scalable, reliable and secure database infrastructure systems
Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB.
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line health care benefits
Full ownership, no micromanagement
Strong equity package
401K
Unlimited vacation
An actual work/life balance; we aren't trying to run you into the ground. We have families and enjoy life too!
Data Platform Engineer / AI Workloads
Fremont, CA
We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads we would love to speak with you.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
5+ years of experience designing and building distributed database systems
Expertise in building and operating scalable, reliable and secure database infrastructure systems
Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB.
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line health care benefits
Full ownership, no micromanagement
Strong equity package
401K
Unlimited vacation
An actual work/life balance; we aren't trying to run you into the ground. We have families and enjoy life too!
AWS Data Engineer
Seattle, WA
Must Have Technical/Functional Skills:
We are seeking an experienced AWS Data Engineer to join our data team and play a crucial role in designing, implementing, and maintaining scalable data infrastructure on Amazon Web Services (AWS). The ideal candidate has a strong background in data engineering, with a focus on cloud-based solutions, and is proficient in leveraging AWS services to build and optimize data pipelines, data lakes, and ETL processes. You will work closely with data scientists, analysts, and stakeholders to ensure data availability, reliability, and security for our data-driven applications.
Roles & Responsibilities:
Key Responsibilities:
• Design and Development: Design, develop, and implement data pipelines using AWS services such as AWS Glue, Lambda, S3, Kinesis, and Redshift to process large-scale data.
• ETL Processes: Build and maintain robust ETL processes for efficient data extraction, transformation, and loading, ensuring data quality and integrity across systems.
• Data Warehousing: Design and manage data warehousing solutions on AWS, particularly with Redshift, for optimized storage, querying, and analysis of structured and semi-structured data.
• Data Lake Management: Implement and manage scalable data lake solutions using AWS S3, Glue, and related services to support structured, unstructured, and streaming data.
• Data Security: Implement data security best practices on AWS, including access control, encryption, and compliance with data privacy regulations.
• Optimization and Monitoring: Optimize data workflows and storage solutions for cost and performance. Set up monitoring, logging, and alerting for data pipelines and infrastructure health.
• Collaboration: Work closely with data scientists, analysts, and business stakeholders to understand data needs and deliver data solutions aligned with business goals.
• Documentation: Create and maintain documentation for data infrastructure, data pipelines, and ETL processes to support internal knowledge sharing and compliance.
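One concrete detail behind the data lake management responsibility above: the Hive-style partitioned key layout commonly used on S3. The dataset and file names below are invented for illustration:

```python
from datetime import date

# Build a year/month/day-partitioned object key for a data-lake write.
# Partition pruning in Glue/Athena/Redshift Spectrum relies on this layout.
def lake_key(dataset: str, d: date, filename: str) -> str:
    return f"{dataset}/year={d.year}/month={d.month:02d}/day={d.day:02d}/{filename}"

key = lake_key("orders", date(2024, 3, 7), "part-0001.parquet")
```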
Base Salary Range: $100,000 - $130,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Data Governance Engineer
Phoenix, AZ
Role: Data Governance Engineer
Experience Required - 6+ Years
Must Have Technical/Functional Skills
• Understanding of Data Management and Data Governance concepts (metadata, lineage, data quality, etc.) and prior experience.
• 2 - 5 years of Data Quality Management experience.
• Intermediate competency in SQL & Python or related programming language.
• Strong familiarity with data architecture and/or data modeling concepts
• 2 - 5 years of experience with Agile or SAFe project methodologies
Roles & Responsibilities
• Assist in identifying data-related risks and associated controls for key business processes. Risks relate to Record Retention, Data Quality, Data Movement, Data Stewardship, Data Protection, Data Sharing, among others.
• Identify data quality issues, perform root-cause-analysis of data quality issues and drive remediation of audit and regulatory feedback.
• Develop deep understanding of key enterprise data-related policies and serve as the policy expert for the business unit, providing education to teams regarding policy implications for business.
• Responsible for holistic platform data quality monitoring, including but not limited to critical data elements.
• Collaborate with and influence product managers to ensure all new use cases are managed according to policies.
• Influence and contribute to strategic improvements to data assessment processes and analytical tools.
• Responsible for monitoring data quality issues, communicating issues, and driving resolution.
• Support current regulatory reporting needs via existing platforms, working with upstream data providers, downstream business partners, as well as technology teams.
• Subject matter expertise on multiple platforms.
• Partner with the Data Steward Manager in developing and managing the data compliance roadmap.
Generic Managerial Skills, If any
• Drives Innovation & Change: Provides systematic and rational analysis to identify the root cause of problems. Is prepared to challenge the status quo and drive innovation. Makes informed judgments and recommends tailored solutions.
• Leverages Team - Collaboration: Coordinates efforts within and across teams to deliver goals; accountable for bringing in ideas, information, suggestions, and expertise from others inside and outside the immediate team.
• Communication: Influences and holds others accountable, with the ability to convince others. Identifies specific data governance requirements and communicates them clearly and compellingly.
Interested candidates, please share your updated resume at *******************
Salary Range - $100,000 to $120,000 per year
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Sr Data Analyst
Dallas, TX jobs
Primary responsibilities of the Senior Data Analyst include supporting and analyzing data anomalies across multiple environments, including but not limited to Data Warehouse, ODS, and Data Replication/ETL Data Management initiatives. The candidate will be in a supporting role and will work closely with the Business, DBA, ETL, and Data Management teams, providing analysis and support for complex data-related initiatives. This individual will also assist with the initial setup and ongoing documentation/configuration of Data Governance and Master Data Management solutions. The candidate must have a passion for data, along with good SQL, analytical, and communication skills.
Responsibilities
Investigate and Analyze data anomalies and data issues reported by Business
Work with ETL, Replication and DBA teams to determine data transformations, data movement and derivations and document accordingly
Work with support teams to ensure consistent and pro-active support methodologies are adhered to for all aspects of data movements and data transformations
Assist in break fix and production validation as it relates to data derivations, replication and structures
Assist in configuration and on-going setup of Data Virtualization and Master Data Management tools
Assist in keeping documentation up to date as it relates to Data Standardization definitions, Data Dictionary and Data Lineage
Gather information from various sources and interpret patterns and trends
Ability to work in a team-oriented, fast-paced agile environment managing multiple priorities
Qualifications
4+ years of SQL experience working in OLTP, Data Warehouse and Big Data databases
4+ years of experience working with Exadata and SQL Server databases
4+ years in a Data Analyst role
Strong attention to Detail
2+ years writing medium to complex stored procedures a plus
Ability to collaborate effectively and work as part of a team
Extensive background in writing complex queries
Extensive working knowledge of all aspects of Data Movement and Processing, including ETL, API, OLAP and best practices for data tracking
Good Communication skills
Self-Motivated
Works well in a team environment
Denodo Experience a plus
Master Data Management a plus
Big Data Experience a plus (Hadoop, MongoDB)
Postgres and Cloud Experience a plus
Sr. Data Scientist
Atlanta, GA jobs
Job Description
Sr. Data Scientist (100% remote)
ARC Group is currently seeking a Sr. Data Scientist to join a global leader in shipping and enterprise logistics services. The Data Scientist will be an integral part of the organization, working with numerous departments to develop solutions for customers, ensure products meet customer needs, and verify that product quality and performance targets are met.
This is NOT a junior position or a developer role; we need a Data Scientist who will look at arrays of capture points and customer data, then model and analyze that data to create better solutions and solve problems.
This role requires permanent work authorization; sponsorship is not available now or in the future. (No C2C, no brokering.)
Data Scientist Responsibilities:
Align with SMEs to outline analytic requirements and devise the analytics that meet those requirements
Develop data models to optimize and improve the work of e-commerce functions
Understand the flow of data in the domestic product portfolio and define new solutions to capture the right data to help measure performance
Recognize emerging machine learning and pattern recognition algorithms and work with the team to integrate state-of-the-art algorithms into various solutions including product performance
Become an SME for all domestic product portfolio data sources and help define interfaces across various data points, consolidating them to produce the required analytics
Gain industry knowledge to understand and lead analyses of customer injection, market trends and competitive landscape
Data Scientist Requirements:
Bachelor's (master's is a plus) or higher from an accredited college or university in a quantitative discipline (e.g., statistics, mathematics, operations research, engineering, data science or computer science).
Must have data modeling, predictive analytics and/or machine learning experience
5 years of related work experience in two or more of the following: designing/implementing machine learning, data mining, advanced analytical algorithms, advanced statistical analysis, artificial intelligence, or software engineering with data analysis software
MUST HAVE experience in Azure and Azure Data Lake Storage / ADLS
Hands-on work with Azure tools: Power BI, Azure Synapse, Azure Data Explorer, and SQL. You should know how to build a report from Azure data and link it to Power BI
This is not a developer position, but you must possess strong SQL coding skills
Analytical mind and business acumen, with a problem-solving aptitude and the communication skills to put them to use
Nice to have:
Experience in data mining and understanding of machine-learning and operations research is an advantage
Proficiency in Excel, PowerPoint, MS Access is a plus
Knowledge of machine learning workflow toolkits (e.g., Kubeflow) and analytics engines (e.g., Spark)
Any familiarity with other data analytics tools, data frameworks (e.g., Hadoop) is an asset
Knowledge of Python will be a plus
Would you like to know more about our new opportunity? For immediate consideration, please apply online while viewing all open jobs at *******************.
ARC Group is a Forbes-ranked top-20 recruiting and executive search firm working with clients nationwide to recruit the highest-quality technical resources. We have achieved this by understanding both our candidates' and clients' needs and goals and serving both with integrity and a shared desire to succeed.
We are proud to be an equal-opportunity workplace dedicated to pursuing and hiring a diverse workforce.
Senior Data Scientist
Palo Alto, CA jobs
About the Hiring Team
Level Infinite is Tencent's global gaming brand. It is a global game publisher offering a comprehensive network of services for games, development teams, and studios around the world. We are dedicated to delivering engaging and original gaming experiences to a worldwide audience, whenever and wherever they choose to play, while building a community that fosters inclusivity, connection, and accessibility. Level Infinite also provides a wide range of services and resources to our network of developers and partner studios around the world to help them unlock the true potential of their games.
What the Role Entails
Tencent Games was established in 2003. We are a leading global platform for game development, operations, and publishing, and the largest online game community in China. Tencent Games has developed and operated over 140 games. We provide cross-platform interactive entertainment experiences for more than 800 million users in over 200 countries and regions around the world. Honor of Kings, PUBG MOBILE, and League of Legends are some of our most popular titles worldwide. Meanwhile, we actively promote the development of the esports industry, work with global partners to build an open, collaborative, and symbiotic industrial ecology, and create high-quality digital life experiences for players.
About Us:
The Tencent Games Global Data Insight team is a leading global data science team serving game development, publishing, and operations. Our mission is to tackle the gaming industry's difficulties and challenges through data science and technology.
Responsibilities:
As a Data Scientist at Tencent Games, you will be responsible for:
- Collecting, analyzing, and interpreting data to provide data-driven insights and solutions for business problems in the gaming industry, including various analysis and modeling scenarios in the stages of R&D, marketing, and operation.
- Developing computational workflows based on Python, SQL, and big data tools to handle and analyze large complex datasets; Utilizing BI tools to develop data reports and support data insights and business analysis.
- Modeling and developing scalable, efficient, automated data analysis, AB testing, machine learning, validation, and service frameworks.
- Collaborating with product managers and data engineers to design and implement end-to-end data solutions for issues in game development, publishing, and operation.
Why Work with Us:
At Tencent Games Global Data Insight Team, we provide unparalleled data-driven impact, resources, access to over a billion players, and an international perspective. We are dedicated to creating a friendly, cross-cultural environment where you can engage in groundbreaking work and take on new challenges that will create new adventures for billions of players. If you are ambitious, self-driven, and passionate about the gaming industry, we invite you to explore the opportunities with us.
Who We Look For
To be considered for this position, you should have:
- A master's degree or above in computer science, machine learning, or math/statistics.
- A passion for applying mathematical models and computer science technology to data-driven decision-making, along with profound insights into the gaming industry.
- Expertise in mathematical modeling, statistical analysis, and machine learning, including statistics, machine learning, AB testing, etc.
- Proficiency in programming languages such as Python, SQL, or related languages.
Location State(s)
US-California-Palo Alto
The expected base pay range for this position in the location(s) listed above is $118,100.00 to $274,600.00 per year. Actual pay may vary depending on job-related knowledge, skills, and experience. Employees hired for this position may be eligible for a sign-on payment, relocation package, and restricted stock units, which will be evaluated on a case-by-case basis. Subject to the terms and conditions of the plans in effect, hired applicants are also eligible for medical, dental, vision, life and disability benefits, and participation in the Company's 401(k) plan. The Employee is also eligible for 15 to 25 days of vacation per year (depending on the employee's tenure), up to 13 days of holidays throughout the calendar year, and up to 10 days of paid sick leave per year. Your benefits may be adjusted to reflect your location, employment status, duration of employment with the company, and position level. Benefits may also be pro-rated for those who start working during the calendar year.
Equal Employment Opportunity at Tencent
As an equal opportunity employer, we firmly believe that diverse voices fuel our innovation and allow us to better serve our users and the community. We foster an environment where every employee of Tencent feels supported and inspired to achieve individual and common goals.
Senior Data Scientist, Sustainable Technology
OFallon, MO jobs
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.
Title and Summary
Senior Data Scientist, Sustainable Technology
Job Overview:
As the Senior Data Engineer / Scientist for Mastercard's Sustainable Technology, you will be the driving force behind the development and execution of cutting-edge data strategies and data environment frameworks. Your expertise will ensure the effective utilization of data, enabling the delivery of dependable Data & Analytics Services. You will collaborate with cross-functional teams and establish data-related best practices in alignment with Mastercard standards.
Responsibilities:
* Design, develop, and maintain new data capabilities and infrastructure for Mastercard's Sustainable Technology Internal Data Lake.
* Create new data pipelines, data transfers, and compliance-oriented infrastructure to facilitate seamless data utilization within on-premise/cloud environments.
* Identify existing data capability and infrastructure gaps or opportunities within and across initiatives and provide subject matter expertise in support of remediation.
* Collaborate with technical teams and business stakeholders to understand data requirements and translate them into technical solutions.
* Work with large datasets, ensuring data quality, accuracy, and performance.
* Implement data transformation, integration, and validation processes to support analytics/BI and reporting needs.
* Optimize and fine-tune data pipelines for improved speed, reliability, and efficiency.
* Implement best practices for data storage, retrieval, and archival to ensure data accessibility and security.
* Troubleshoot and resolve data-related issues, collaborating with the team to identify root causes.
* Document data processes, data lineage, and technical specifications for future reference.
* Participate in code reviews, ensuring adherence to coding standards and best practices.
* Collaborate with DevOps teams to automate deployment and monitoring of data pipelines.
* Additional tasks as required.
All About You
Education:
* Bachelor's degree in Computer Science, Engineering, Data Science, or a related field.
Knowledge / Experience:
* Proven experience as a Data Engineer / Scientist or similar role.
* Deep understanding of data visualization, statistics, hypothesis testing, business intelligence tools, SQL, data cleaning, and data lifecycle management.
* Proficiency in designing and implementing data tools, technologies, and processes.
* Expertise in data engineering, ETL/ELT processes, data warehousing, and data modeling.
* Strong command of data integration techniques and data quality management.
* Hands-on experience with data technologies such as Hadoop, Spark, Python, SQL, Alteryx, NiFi, SSIS, etc.
* Familiarity with cloud platforms and services, such as AWS, GCP, or Azure.
* Excellent problem-solving skills and ability to provide innovative data solutions.
* Strong leadership skills with a proven track record of guiding and mentoring a team.
* 3+ years of experience in a related field.
* 3+ years of experience delivering secure solutions in the Financial Services sector preferred.
* Broad understanding of Software Engineering Concepts and Methodologies is required.
* Demonstrate MC Core Competencies.
Skills/ Abilities:
* Must be high-energy, detail-oriented, and proactive, with the ability to function under pressure in an independent environment.
* Experience building data pipelines through Spark with Scala/Python/Java on Hadoop or Object storage.
* Expertise in Data Engineering and Data Analysis: implementing multiple end-to-end DW projects in Big Data Hadoop environment.
* Experience working with databases like MS SQL Server, Oracle, and strong SQL knowledge.
* Experience in BI tools like Tableau, Power BI.
* Experience with Alteryx, SSIS, NiFi, Spark, Cloudera Machine Learning, the S3 protocol, Power BI, NoSQL data structures, Splunk, and Databricks (an added advantage).
* Experience automating data flow processes in a Big Data environment.
* Pulling in data from various monitoring platforms to aggregate and enable data science work in support of ESG (i.e., sustainable impact).
* Must have a high degree of initiative and self-motivation to drive results.
* Possesses strong verbal and written communication skills, along with strong relationship-building, collaboration, and organizational skills.
* Willingness and ability to learn, take on challenging opportunities, and work as a member of a matrix-based, diverse, and geographically distributed project team.
Mastercard is a merit-based, inclusive, equal opportunity employer that considers applicants without regard to gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law. We hire the most qualified candidate for the role. In the US or Canada, if you require accommodations or assistance to complete the online application process or during the recruitment process, please contact reasonable_accommodation@mastercard.com and identify the type of accommodation or assistance you are requesting. Do not include any medical or health information in this email. The Reasonable Accommodations team will respond to your email promptly.
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
* Abide by Mastercard's security policies and practices;
* Ensure the confidentiality and integrity of the information being accessed;
* Report any suspected information security violation or breach, and
* Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
In line with Mastercard's total compensation philosophy and assuming that the job will be performed in the US, the successful candidate will be offered a competitive base salary and may be eligible for an annual bonus or commissions depending on the role. The base salary offered may vary depending on multiple factors, including but not limited to location, job-related knowledge, skills, and experience. Mastercard benefits for full time (and certain part time) employees generally include: insurance (including medical, prescription drug, dental, vision, disability, life insurance); flexible spending account and health savings account; paid leaves (including 16 weeks of new parent leave and up to 20 days of bereavement leave); 80 hours of Paid Sick and Safe Time, 25 days of vacation time and 5 personal days, pro-rated based on date of hire; 10 annual paid U.S. observed holidays; 401k with a best-in-class company match; deferred compensation for eligible roles; fitness reimbursement or on-site fitness facilities; eligibility for tuition reimbursement; and many more. Mastercard benefits for interns generally include: 56 hours of Paid Sick and Safe Time; jury duty leave; and on-site fitness facilities in some locations.
Pay Ranges
O'Fallon, Missouri: $115,000 - $184,000 USD
Auto-ApplySenior Data Scientist, Sustainable Technology
OFallon, MO jobs
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.
Title and Summary
Senior Data Scientist, Sustainable TechnologyJob Overview:
As the Senior Data Engineer / Scientist for Mastercard's Sustainable Technology, you will be the driving force behind the development and execution of cutting-edge data strategies and data environment frameworks. Your expertise will ensure the effective utilization of data, enabling the delivery of dependable Data & Analytics Services. You will collaborate with cross-functional teams and establish data-related best practices in alignment with Mastercard standards.
Responsibilities:
• Design, develop, and maintain new data capabilities and infrastructure for Mastercard's Sustainable Technology Internal Data Lake.
• Create new data pipelines, data transfers, and compliance-oriented infrastructure to facilitate seamless data utilization within on-premise/cloud environments.
• Identify existing data capability and infrastructure gaps or opportunities within and across initiatives and provide subject matter expertise in support of remediation.
• Collaborate with technical teams and business stakeholders to understand data requirements and translate them into technical solutions.
• Work with large datasets, ensuring data quality, accuracy, and performance.
• Implement data transformation, integration, and validation processes to support analytics/BI and reporting needs.
• Optimize and fine-tune data pipelines for improved speed, reliability, and efficiency.
• Implement best practices for data storage, retrieval, and archival to ensure data accessibility and security.
• Troubleshoot and resolve data-related issues, collaborating with the team to identify root causes.
• Document data processes, data lineage, and technical specifications for future reference.
• Participate in code reviews, ensuring adherence to coding standards and best practices.
• Collaborate with DevOps teams to automate deployment and monitoring of data pipelines.
• Additional tasks as required.
All About You
Education:
• Bachelor's degree in Computer Science, Engineering, Data Science, or a related field.
Knowledge / Experience:
• Proven experience as a Data Engineer / Scientist or similar role.
• Deep understanding of data visualization, statistics, hypothesis testing, business intelligence tools, SQL, data cleaning, and data lifecycle management.
• Proficiency in designing and implementing data tools, technologies, and processes.
• Expertise in data engineering, ETL/ELT processes, data warehousing, and data modeling.
• Strong command of data integration techniques and data quality management.
• Hands-on experience with data technologies such as Hadoop, Spark, Python, SQL, Alteryx, NiFi, SSIS, etc.
• Familiarity with cloud platforms and services, such as AWS, GCP, or Azure.
• Excellent problem-solving skills and ability to provide innovative data solutions.
• Strong leadership skills with a proven track record of guiding and mentoring a team.
• 3+ years of experience in related field.
• 3+ years of experience in delivering secure solutions in Financial Services Sector is preferred.
• Broad understanding of Software Engineering Concepts and Methodologies is required.
• Demonstrate MC Core Competencies.
Skills/ Abilities:
• Must be high-energy, detail-oriented, proactive and have the ability to function under pressure in an independent environment.
• Experience building data pipelines through Spark with Scala/Python/Java on Hadoop or Object storage.
• Expertise in Data Engineering and Data Analysis: implementing multiple end-to-end DW projects in Big Data Hadoop environment.
• Experience working with databases like MS SQL Server, Oracle, and strong SQL knowledge.
• Experience in BI tools like Tableau, Power BI.
• Experience with Alteryx, SSIS, NiFi, Spark, Cloudera Machine Learning, S3 Protocol, PowerBI, NoSQL data structures, Splunk, Databricks, (added advantage)
• Experience automating data flow processes in a Big Data environment.
• Pulling in data from various monitoring platforms to aggregate and enable data science work to support ESG (ie sustainable impact)
• Must provide the necessary skills to have a high degree of initiative and self-motivation to drive results.
• Possesses strong communication skills -- both verbal and written - and strong relationship, collaborative skills and organizational skills.
• Willingness and ability to learn and take on challenging opportunities and to work as a member of matrix based diverse and geographically distributed project team.Mastercard is a merit-based, inclusive, equal opportunity employer that considers applicants without regard to gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law. We hire the most qualified candidate for the role. In the US or Canada, if you require accommodations or assistance to complete the online application process or during the recruitment process, please contact reasonable_accommodation@mastercard.com and identify the type of accommodation or assistance you are requesting. Do not include any medical or health information in this email. The Reasonable Accommodations team will respond to your email promptly.
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks comes with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
Abide by Mastercard's security policies and practices;
Ensure the confidentiality and integrity of the information being accessed;
Report any suspected information security violation or breach, and
Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
In line with Mastercard's total compensation philosophy and assuming that the job will be performed in the US, the successful candidate will be offered a competitive base salary and may be eligible for an annual bonus or commissions depending on the role. The base salary offered may vary depending on multiple factors, including but not limited to location, job-related knowledge, skills, and experience. Mastercard benefits for full time (and certain part time) employees generally include: insurance (including medical, prescription drug, dental, vision, disability, life insurance); flexible spending account and health savings account; paid leaves (including 16 weeks of new parent leave and up to 20 days of bereavement leave); 80 hours of Paid Sick and Safe Time, 25 days of vacation time and 5 personal days, pro-rated based on date of hire; 10 annual paid U.S. observed holidays; 401k with a best-in-class company match; deferred compensation for eligible roles; fitness reimbursement or on-site fitness facilities; eligibility for tuition reimbursement; and many more. Mastercard benefits for interns generally include: 56 hours of Paid Sick and Safe Time; jury duty leave; and on-site fitness facilities in some locations.
Pay Ranges
O'Fallon, Missouri: $115,000 - $184,000 USD
Auto-ApplySenior Data Scientist, Sustainable Technology
OFallon, MO jobs
**Our Purpose** _Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential._
**Title and Summary**
Senior Data Scientist, Sustainable Technology
Job Overview:
As the Senior Data Engineer / Scientist for Mastercard's Sustainable Technology, you will be the driving force behind the development and execution of cutting-edge data strategies and data environment frameworks. Your expertise will ensure the effective utilization of data, enabling the delivery of dependable Data & Analytics Services. You will collaborate with cross-functional teams and establish data-related best practices in alignment with Mastercard standards.
Responsibilities:
- Design, develop, and maintain new data capabilities and infrastructure for Mastercard's Sustainable Technology Internal Data Lake.
- Create new data pipelines, data transfers, and compliance-oriented infrastructure to facilitate seamless data utilization within on-premise/cloud environments.
- Identify existing data capability and infrastructure gaps or opportunities within and across initiatives and provide subject matter expertise in support of remediation.
- Collaborate with technical teams and business stakeholders to understand data requirements and translate them into technical solutions.
- Work with large datasets, ensuring data quality, accuracy, and performance.
- Implement data transformation, integration, and validation processes to support analytics/BI and reporting needs.
- Optimize and fine-tune data pipelines for improved speed, reliability, and efficiency.
- Implement best practices for data storage, retrieval, and archival to ensure data accessibility and security.
- Troubleshoot and resolve data-related issues, collaborating with the team to identify root causes.
- Document data processes, data lineage, and technical specifications for future reference.
- Participate in code reviews, ensuring adherence to coding standards and best practices.
- Collaborate with DevOps teams to automate deployment and monitoring of data pipelines.
- Additional tasks as required.
All About You
Education:
- Bachelor's degree in Computer Science, Engineering, Data Science, or a related field.
Knowledge / Experience:
- Proven experience as a Data Engineer / Scientist or similar role.
- Deep understanding of data visualization, statistics, hypothesis testing, business intelligence tools, SQL, data cleaning, and data lifecycle management.
- Proficiency in designing and implementing data tools, technologies, and processes.
- Expertise in data engineering, ETL/ELT processes, data warehousing, and data modeling.
- Strong command of data integration techniques and data quality management.
- Hands-on experience with data technologies such as Hadoop, Spark, Python, SQL, Alteryx, NiFi, SSIS, etc.
- Familiarity with cloud platforms and services, such as AWS, GCP, or Azure.
- Excellent problem-solving skills and ability to provide innovative data solutions.
- Strong leadership skills with a proven track record of guiding and mentoring a team.
- 3+ years of experience in a related field.
- 3+ years of experience delivering secure solutions in the Financial Services sector preferred.
- Broad understanding of Software Engineering Concepts and Methodologies is required.
- Demonstrate MC Core Competencies.
Skills/ Abilities:
- Must be high-energy, detail-oriented, and proactive, with the ability to function under pressure in an independent environment.
- Experience building data pipelines through Spark with Scala/Python/Java on Hadoop or Object storage.
- Expertise in data engineering and data analysis, including implementing multiple end-to-end data warehouse (DW) projects in a Big Data Hadoop environment.
- Experience working with databases like MS SQL Server, Oracle, and strong SQL knowledge.
- Experience in BI tools like Tableau, Power BI.
- Experience with Alteryx, SSIS, NiFi, Spark, Cloudera Machine Learning, S3 Protocol, Power BI, NoSQL data structures, Splunk, or Databricks an added advantage.
- Experience automating data flow processes in a Big Data environment.
- Experience pulling data from various monitoring platforms and aggregating it to enable data science work supporting ESG (i.e., sustainability impact).
- Must demonstrate a high degree of initiative and self-motivation to drive results.
- Strong verbal and written communication skills, along with strong relationship-building, collaboration, and organizational skills.
- Willingness and ability to learn, take on challenging opportunities, and work as a member of a matrix-based, diverse, and geographically distributed project team.
Mastercard is a merit-based, inclusive, equal opportunity employer that considers applicants without regard to gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law. We hire the most qualified candidate for the role. In the US or Canada, if you require accommodations or assistance to complete the online application process or during the recruitment process, please contact reasonable_accommodation@mastercard.com and identify the type of accommodation or assistance you are requesting. Do not include any medical or health information in this email. The Reasonable Accommodations team will respond to your email promptly.
**Corporate Security Responsibility**
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
+ Abide by Mastercard's security policies and practices;
+ Ensure the confidentiality and integrity of the information being accessed;
+ Report any suspected information security violation or breach, and
+ Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
In line with Mastercard's total compensation philosophy and assuming that the job will be performed in the US, the successful candidate will be offered a competitive base salary and may be eligible for an annual bonus or commissions depending on the role. The base salary offered may vary depending on multiple factors, including but not limited to location, job-related knowledge, skills, and experience. Mastercard benefits for full time (and certain part time) employees generally include: insurance (including medical, prescription drug, dental, vision, disability, life insurance); flexible spending account and health savings account; paid leaves (including 16 weeks of new parent leave and up to 20 days of bereavement leave); 80 hours of Paid Sick and Safe Time, 25 days of vacation time and 5 personal days, pro-rated based on date of hire; 10 annual paid U.S. observed holidays; 401k with a best-in-class company match; deferred compensation for eligible roles; fitness reimbursement or on-site fitness facilities; eligibility for tuition reimbursement; and many more. Mastercard benefits for interns generally include: 56 hours of Paid Sick and Safe Time; jury duty leave; and on-site fitness facilities in some locations.
**Pay Ranges**
O'Fallon, Missouri: $115,000 - $184,000 USD
Senior Data Scientist
Vienna, VA jobs
Provide independent data science, machine learning, and analytical insights using member, financial, and organizational data to support mission critical decision making for various areas of the organization. Understand business needs and identify opportunities for new products, services, and process optimization to meet business objectives through the use of cutting-edge data science. Create descriptive, predictive, and prescriptive models and insights to drive impact across the organization. Conduct work assignments of increasing complexity, under moderate supervision with some latitude for independent judgment. Intermediate professional within field; requires moderate skill set and proficiency in discipline.
3-5 years of experience in exploratory data analysis
Basic understanding of business and operating environment
Statistics
Programming, data modeling, simulation, and advanced mathematics
SQL, R, Python, Hadoop, SAS, SPSS, Scala, AWS
Model lifecycle execution
Technical writing
Data storytelling and technical presentation skills
Research Skills
Interpersonal Skills
Working knowledge of procedures, instructions, and validation techniques
Model Development
Communication
Critical Thinking
Collaborate and Build Relationships
Initiative with sound judgement
Technical (Big Data Analysis, Coding, Project Management, Technical Writing, etc.)
Sound Judgment
Problem Solving (Responds as problems and issues are identified)
Bachelor's Degree in Data Science, Statistics, Mathematics, Computer Science, Engineering, or a similar quantitative field
Desired Qualification(s)
Master's/PhD Degree in Data Science, Statistics, Mathematics, Computer Science, or Engineering
4-5 years of experience in exploratory data analysis
Hours: Monday - Friday, 8:00AM - 4:30PM
Location: 820 Follin Lane, Vienna, VA 22180
Design, develop, and evaluate moderately complex predictive models and advanced algorithms
Identify meaningful insights from large data and metadata sources
Test hypotheses/models, analyze, and interpret results
Exercise sound judgment and discretion within defined procedures and practices
Develop and code moderately complex software programs, algorithms, and automated processes
Use modeling and trend analysis to analyze data and provide insights
Develop understanding of best practices and ethical AI
Transform data into charts, tables, or format that aids effective decision making
Build working relationships with team members and subject matter experts
Lead small projects and initiatives
Utilize effective written and verbal communication to document and present findings of analyses to a diverse audience of stakeholders
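The modeling and trend-analysis duty above can be sketched with a simple least-squares fit over evenly spaced observations. The data series is purely illustrative, not from any actual member dataset:

```python
from statistics import mean

def trend(values):
    """Fit a least-squares line y = a + b*x over evenly spaced observations
    (x = 0, 1, 2, ...) and return (intercept, slope). The slope is the
    average per-period change, a basic descriptive trend measure."""
    xs = range(len(values))
    x_bar, y_bar = mean(xs), mean(values)
    slope = (
        sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values))
        / sum((x - x_bar) ** 2 for x in xs)
    )
    intercept = y_bar - slope * x_bar
    return intercept, slope

# Hypothetical monthly new-member counts (illustrative numbers only)
monthly = [100, 104, 107, 111, 118, 121]
intercept, slope = trend(monthly)
```

In practice this kind of descriptive fit would be a starting point before the predictive and prescriptive models the role calls for, typically built with tools such as R, Python, or SAS.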