Machine Learning Data Scientist
Data engineer job in Pittsburgh, PA
Machine Learning Data Scientist
Length: 6 Month Contract to Start
*Please, no agencies. Direct applicants only; candidates must be currently authorized to work in the United States - no sponsorship available.*
Job Description:
We are looking for a Data Scientist/Engineer with machine learning expertise and strong skills in Python, time-series modeling, and SCADA/industrial data. In this role, you will build and deploy ML models for forecasting, anomaly detection, and predictive maintenance using high-frequency sensor and operational data.
Essential Duties and Responsibilities:
Develop ML models for time-series forecasting and anomaly detection
Build data pipelines for SCADA/IIoT data ingestion and processing
Perform feature engineering and signal analysis on time-series data
Deploy models in production using APIs, microservices, and MLOps best practices
Collaborate with data engineers and domain experts to improve data quality and model performance
Qualifications:
Strong Python skills
Experience working with SCADA systems or industrial data historians
Solid understanding of time-series analytics and signal processing
Experience with cloud platforms and containerization (AWS/Azure/GCP, Docker)
POST-OFFER BACKGROUND CHECK IS REQUIRED. Digital Prospectors is an Equal Opportunity Employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other characteristic protected by law. Digital Prospectors affirms the right of all individuals to equal opportunity and prohibits any form of discrimination or harassment.
Come see why DPC has achieved:
4.9/5 star Glassdoor rating, and the only staffing company (<1,000 employees) voted into the national Top 10 'Employees' Choice - Best Places to Work' by Glassdoor.
Voted 'Best Staffing Firm to Temp/Contract For' seven times by Staffing Industry Analysts, as well as a 'Best Company to Work For' by Forbes, Fortune, and Inc. magazines.
As you are applying, please join us in fostering diversity, equity, and inclusion by completing the Invitation to Self-Identify form today!
*******************
Job #18135
Data Engineer (IoT)
Data engineer job in Pittsburgh, PA
As an IoT Data Engineer at CurvePoint, you will design, build, and optimize the data pipelines that power our Wi-AI sensing platform. Your work will focus on reliable, low-latency data acquisition from constrained on-prem IoT devices, efficient buffering and streaming, and scalable cloud-based storage and training workflows.
You will own how raw sensor data (e.g., wireless CSI, video, metadata) moves from edge devices with limited disk and compute into durable, well-structured datasets used for model training, evaluation, and auditability. You will work closely with hardware, ML, and infrastructure teams to ensure our data systems are fast, resilient, and cost-efficient at scale.
Duties and Responsibilities
Edge & On-Prem Data Acquisition
Design and improve data capture pipelines on constrained IoT devices and host servers (limited disk, intermittent connectivity, real-time constraints).
Implement buffering, compression, batching, and backpressure strategies to prevent data loss.
Optimize data transfer from edge → on-prem host → cloud.
Streaming & Ingestion Pipelines
Build and maintain streaming or near-real-time ingestion pipelines for sensor data (e.g., CSI, video, logs, metadata).
Ensure data integrity, ordering, and recoverability across failures.
Design mechanisms for replay, partial re-ingestion, and audit trails.
Cloud Data Pipelines & Storage
Own cloud-side ingestion, storage layout, and lifecycle policies for large time-series datasets.
Balance cost, durability, and performance across hot, warm, and cold storage tiers.
Implement data versioning and dataset lineage to support model training and reproducibility.
Training Data Enablement
Structure datasets to support efficient downstream ML training, evaluation, and experimentation.
Work closely with ML engineers to align data formats, schemas, and sampling strategies with training needs.
Build tooling for dataset slicing, filtering, and validation.
Reliability & Observability
Add monitoring, metrics, and alerts around data freshness, drop rates, and pipeline health.
Debug pipeline failures across edge, on-prem, and cloud environments.
Continuously improve system robustness under real-world operating conditions.
Cross-Functional Collaboration
Partner with hardware engineers to understand sensor behavior and constraints.
Collaborate with ML engineers to adapt pipelines as model and data requirements evolve.
Contribute to architectural decisions as the platform scales from pilots to production deployments.
Must Haves
Bachelor's degree in Computer Science, Electrical Engineering, or a related field (or equivalent experience).
3+ years of experience as a Data Engineer or Backend Engineer working with production data pipelines.
Strong Python skills; experience building reliable data processing systems.
Hands-on experience with streaming or near-real-time data ingestion (e.g., Kafka, Kinesis, MQTT, custom TCP/UDP pipelines).
Experience working with on-prem systems or edge/IoT devices, including disk, bandwidth, or compute constraints.
Familiarity with cloud storage and data lifecycle management (e.g., S3-like object stores).
Strong debugging skills across distributed systems.
Nice to Have
Experience with IoT or sensor data (RF/CSI, video, audio, industrial telemetry).
Familiarity with data compression, time-series formats, or binary data handling.
Experience supporting ML training pipelines or large-scale dataset management.
Exposure to containerized or GPU-enabled data processing environments.
Knowledge of data governance, retention, or compliance requirements.
Location
Pittsburgh, PA (hybrid preferred; some on-site work with hardware teams)
Salary
$110,000 - $135,000 / year (depending on experience and depth in streaming + IoT systems)
Azure data engineer
Data engineer job in Pittsburgh, PA
Job Title - Databricks Data Engineer
**Must have 8+ years of hands-on experience**
We are specifically seeking a Lead Data Engineer with strong expertise in Databricks development.
The role involves:
Building and testing data pipelines using Python/Scala on Databricks
Hands-on development experience and the ability to lead an offshore team performing development and testing work in Azure Databricks
Architect data platforms using Azure services such as Azure Data Factory (ADF), Azure Databricks (ADB), Azure SQL Database, and PySpark.
Collaborate with stakeholders to understand business needs and translate them into technical solutions.
Provide technical leadership and guidance to the data engineering team while also performing hands-on development.
Familiarity with SAFe (Scaled Agile) concepts; working experience in an agile model is a plus.
Develop and maintain data pipelines for efficient data movement and transformation.
Communicate and coordinate between onsite and offshore teams.
Create and update the documentation to facilitate cross-training and troubleshooting
Hands-on experience with scheduling tools such as BMC Control-M, including setting up jobs and testing schedules.
Understand data models and schemas to support development work and help create tables in Databricks.
Proficiency in Azure Data Factory (ADF), Azure Databricks (ADB), SQL, NoSQL, PySpark, Power BI and other Azure data tools.
Implementing automated data validation frameworks such as Great Expectations or Deequ
Reconciling large-scale datasets
Ensuring data reliability across both batch and streaming processes
The ideal candidate will have hands-on experience with:
PySpark, Scala, Delta Lake, and Unity Catalog
DevOps CI/CD automation
Cloud-native data services
Azure Databricks/Oracle
BMC Control-M
Location: Pittsburgh, PA
Hadoop Data Engineer
Data engineer job in Pittsburgh, PA
About the job:
We are seeking an accomplished Tech Lead - Data Engineer to architect and drive the development of large-scale, high-performance data platforms supporting critical customer and transaction-based systems. The ideal candidate will have a strong background in data pipeline design, Hadoop ecosystem, and real-time data processing, with proven experience building data solutions that power digital products and decisioning platforms in a complex, regulated environment.
As a technical leader, you will guide a team of engineers to deliver scalable, secure, and reliable data solutions enabling advanced analytics, operational efficiency, and intelligent customer experiences.
Key Roles & Responsibilities
Lead and oversee the end-to-end design, implementation, and optimization of data pipelines supporting key customer onboarding, transaction, and decisioning workflows.
Architect and implement data ingestion, transformation, and storage frameworks leveraging Hadoop, Avro, and distributed data processing technologies.
Partner with product, analytics, and technology teams to translate business requirements into scalable data engineering solutions that enhance real-time data accessibility and reliability.
Provide technical leadership and mentorship to a team of data engineers, ensuring adherence to coding, performance, and data quality standards.
Design and implement robust data frameworks to support next-generation customer and business product launches.
Develop best practices for data governance, security, and compliance aligned with enterprise and regulatory requirements.
Drive optimization of existing data pipelines and workflows for improved efficiency, scalability, and maintainability.
Collaborate closely with analytics and risk modeling teams to ensure data readiness for predictive insights and strategic decision-making.
Evaluate and integrate emerging data technologies to future-proof the data platform and enhance performance.
Must-Have Skills
8-10 years of experience in data engineering, with at least 2-3 years in a technical leadership role.
Strong expertise in the Hadoop ecosystem (HDFS, Hive, MapReduce, HBase, Pig, etc.).
Experience working with Avro, Parquet, or other serialization formats.
Proven ability to design and maintain ETL / ELT pipelines using tools such as Spark, Flink, Airflow, or NiFi.
Proficiency in Python and Scala for large-scale data processing.
Strong understanding of data modeling, data warehousing, and data lake architectures.
Hands-on experience with SQL and both relational and NoSQL data stores.
Cloud data platform experience with AWS.
Deep understanding of data security, compliance, and governance frameworks.
Excellent problem-solving, communication, and leadership skills.
Cloud Data Architect
Data engineer job in Pittsburgh, PA
Duquesne Light Company, headquartered in downtown Pittsburgh, is a leader in providing electric energy and has been in the forefront of the electric energy market, with a history rooted in technological innovation and superior customer service. Today, the company continues its role as a leader in the transmission and distribution of electric energy, providing a secure supply of reliable power to more than half a million customers in southwestern Pennsylvania.
Duquesne Light Company is committed to creating a culture of inclusion. We value and respect the unique differences and experiences of our employees. We believe that our differences lead to better collaboration, innovation and outcomes. We want you to join our team!
The role of Cloud Data Architect I is to expand the company's use of data as a strategic enabler of corporate goals and objectives. The Cloud Data Architect will achieve this by strategically designing, developing, and implementing data models for data stored in enterprise systems/platforms or curated from third parties, and by making that data accessible and available for business consumers to analyze and gain valuable insights. This individual will act as the primary advocate of data modeling best practices and lead innovation in adopting and leveraging cloud data technologies. Overtime & on-call availability as required.
Location: Hybrid (two days per week in office), downtown Pittsburgh, Pennsylvania
Responsibilities:
Strategy & Planning
Develop and deliver long-term strategic goals for data architecture vision and standards with data users, department managers, business partners, and other key stakeholders.
Create short-term tactical solutions to achieve long-term objectives and an overall data management roadmap.
Collaborate with third parties and business subject matter experts to enable company strategic imperatives that require secure and accessible data cloud capabilities or assist in curating data required to enrich models.
Collaborate with IT Enterprise Architecture, Enterprise Platforms, Business Intelligence, Data Engineering, Data Quality and Data Governance stakeholders to:
Develop processes for governing the identification, collection, and use of corporate metadata;
Take steps to assure metadata accuracy and validity.
Track data quality, completeness, redundancy, and improvement.
Conduct cloud consumption cost forecasting, usage requirements, proof of concepts, proof of business value, feasibility studies, and other tasks.
Create strategies and plans for data security, backup, disaster recovery, business continuity, and archiving.
Ensure that data strategies and architectures are in regulatory compliance.
Acquisition & Deployment
Liaise with vendors, cloud providers and service providers to select the solutions or services that best meet company goals.
Operational Management
Develop and promote data management methodologies and standards.
Select and implement the appropriate tools, software, applications, and systems to support data technology goals.
Oversee the mapping of data sources, data movement, interfaces, and analytics, with the goal of ensuring data quality.
Collaborate with scrum masters, project managers, and business unit leaders for all projects involving enterprise data.
Address data-related problems in regard to systems integration, compatibility, and multiple-platform integration.
Act as a leader and advocate of data management, including coaching, training, and career development to staff.
Develop and implement key components as needed to create testing criteria that guarantee the data architecture's fidelity and performance.
Document the data architecture and environment to maintain a current and accurate view of the larger data picture.
Identify and develop opportunities for data reuse, migration, or retirement.
Act as a technical leader to the organization for Data Architecture and Data Engineering best practices.
Education/Experience:
Bachelor's degree in computer science, information systems, computer engineering, or relevant discipline
Twelve (12) or more years of experience is required.
Advanced degrees in a related field preferred.
Relevant professional certifications preferred.
Knowledge & Experience
Utilities experience (oil, gas, and/or electric) strongly preferred.
Five (5) or more years' work experience as a data or information architect.
Hands-on experience with data architecting, data mining, large-scale data modeling, cloud data storage and analytics platforms.
Experience with Azure, AWS or Google data capabilities.
Experience with Databricks, Synapse, Foundry, Power BI, and/or Snowflake preferred.
Experience with enterprise platforms such as Oracle, Maximo, etc., preferred.
Direct experience in implementing data solutions in the cloud.
Strong understanding of relational data structures, theories, principles, and practices.
Strong familiarity with metadata management and associated processes.
Hands-on knowledge of enterprise repository tools, data modeling tools, data mapping tools, and data profiling tools.
Demonstrated expertise with repository creation, and data and information system life cycle methodologies.
Experience with business requirements analysis, entity relationship planning, database design, reporting structures, and so on.
Ability to manage data and metadata migration.
Experience with database platforms, including Oracle RDBMS.
Experience with GIS Geo-Spatial Data, time-phased PI Historian or collector data preferred.
Understanding of Web services (SOAP, XML, UDDI, WSDL).
Object-oriented programming experience (e.g. using Java, J2EE, EJB, .NET, WebSphere, etc.).
Excellent client/user interaction skills to determine requirements.
Proven project management experience.
Good knowledge of applicable data privacy practices and laws.
Storm Roles
All Non-Union Employees will serve in storm roles as appropriate to their role and skillset.
EQUAL OPPORTUNITY EMPLOYER
Duquesne Light Holdings is committed to providing equal employment opportunity to all people in all aspects of the employment relationship, without discrimination because of race, age, sex, color, religion, national origin, disability, sexual orientation and gender identity or status as a Vietnam era or special disabled veteran or any other unlawful basis, as defined by applicable law, and fostering a workplace free of unlawful discrimination and retaliation. This policy affects decisions including, but not limited to, hiring, compensation, benefits, terms and conditions of employment, opportunities for promotion, transfer, layoffs, return from a layoff, training and development, and other privileges of employment. An integral part of Duquesne Light Holdings' commitment is to comply with all applicable federal, state and local laws concerning equal employment and affirmative action.
Duquesne Light Holdings is committed to offering an inclusive and accessible experience for all job seekers, including individuals with disabilities. Our goal is to foster an inclusive and accessible workplace where everyone has the opportunity to be successful.
If you need a reasonable accommodation to search for a job opening, apply for a position, or participate in the interview process, connect with us at *************** and describe the specific accommodation requested for a disability-related limitation.
Software Engineer - Test Systems Developer
Data engineer job in Canonsburg, PA
Job Title: Software Engineer - Test Systems Developer
Education & Experience:
Requires a Bachelor's degree in Software Engineering, or a related Science, Engineering or Mathematics field. Also requires 2+ years of job-related experience or a Master's degree. Agile experience preferred.
CLEARANCE REQUIREMENTS: Secret
Qualifications:
As a Software Engineer - Test Systems Developer (Sr Software Engineer) for the Torpedo Systems Group you will be a member of a cross functional team responsible for sustaining and creating software for embedded applications. You will participate in all phases of the Software Development Life Cycle (SDLC) including requirements analysis, design, implementation, and testing.
We encourage you to apply if you have any of these preferred skills or experiences:
C/C++
LabWindows/CVI
Object Oriented Development.
Windows/Visual Studio
SQL/SQL Server or like relational database experience.
Comfortable implementing ideas from scratch, owning major application features, and taking responsibility for their maintenance and improvement over time.
Experience participating in technical architecture decisions for complex products.
A significant level of Windows application development architecture expertise (e.g., Win32 apps, WPF apps, WinUI 3 apps).
Deep understanding of software design patterns such as MVVM, MVP, etc.
Experience with Windows kernel level debugging and diagnostics using tools such as Windows DDK or WinDBG or equivalent.
Demonstrated in-depth experience developing, testing and debugging software for Windows OS using Visual Studio IDE and Windows SDK.
Demonstrated in-depth understanding of Windows low-level systems development and APIs.
Experience with DevOps concepts such as:
Implementing Version Control and standing up branching strategies.
Automating processes for build, test, and deploy.
Applied experience with agile/lean principles in software development.
What sets you apart:
Welcoming contribution to build a strong collaborative team culture.
Strong understanding of software development process, as well as software engineering concepts, principles, and theories
Creative thinker capable of applying new information quickly to solve challenging problems
Comfortable providing technical leadership
Team player who thrives in collaborative environments and revels in team success
Commitment to ongoing professional development for yourself and others
Senior Java Software Engineer
Data engineer job in Pittsburgh, PA
Qualifications
Bachelor's degree in Computer Science (or related field)
Full Stack Java Developer
8+ years of relevant work experience with Java, J2EE, and RESTful APIs
Experience in Kubernetes (or AWS) and DevOps.
Expertise in Object Oriented Design, Database Design, and XML Schema
Deploy, monitor, and manage applications on Kubernetes or AWS cloud environments.
Experience with Agile or Scrum software development methodologies
Ability to multi-task, organize, and prioritize work
Java Software Engineer
Data engineer job in Pittsburgh, PA
About Us:
LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 700 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree, a Larsen & Toubro Group company, combines the industry-acclaimed strengths of erstwhile Larsen & Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit: ********************************
Job Title: Java Developer
Location: Pittsburgh, PA (4 days onsite/week)
Duration: FTE
Job description:
8 to 10 Years of experience
Strong knowledge of Java and FrontEnd UI Technologies
Experience working with UI toolsets and programming languages: core JavaScript, Angular 11 or higher, JavaScript frameworks, CSS, and HTML
Experience with the Spring Framework and Hibernate, and proficiency with Spring Boot
Solid coding and troubleshooting experience with web services and RESTful APIs
Experience and understanding of design patterns culminating into microservices development
Strong SQL skills to work on relational databases
Strong experience with SDLC and DevOps processes and CI/CD tools (Git, etc.)
Strong problem solver with the ability to manage and lead the team to drive solutions
Strong Communication Skills
Benefits/perks listed below may vary depending on the nature of your employment with LTIMindtree (“LTIM”):
Benefits and Perks:
Comprehensive Medical Plan Covering Medical, Dental, Vision
Short Term and Long-Term Disability Coverage
401(k) Plan with Company match
Life Insurance
Vacation Time, Sick Leave, Paid Holidays
Paid Paternity and Maternity Leave
The range displayed on each job posting reflects the minimum and maximum salary target for the position across all US locations. Within the range, individual pay is determined by work location and job level and additional factors including job-related skills, experience, and relevant education or training. Depending on the position offered, other forms of compensation may be provided as part of overall compensation like an annual performance-based bonus, sales incentive pay and other forms of bonus or variable compensation.
Disclaimer: The compensation and benefits information provided herein is accurate as of the date of this posting.
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
Azure DevOps Engineer with P&C exp.
Data engineer job in Pittsburgh, PA
Responsibilities
Following are the day-to-day work activities:
CI/CD Pipeline Management: Design, implement, and maintain Continuous Integration/Continuous Deployment (CI/CD) pipelines for Guidewire applications using tools like TeamCity, GitLab CI, and others.
Infrastructure Automation: Automate infrastructure provisioning and configuration management using tools such as Terraform, Ansible, or CloudFormation.
Monitoring and Logging: Implement and manage monitoring and logging solutions to ensure system reliability, performance, and security.
Collaboration: Work closely with development, QA, and operations teams to streamline processes and improve efficiency.
Security: Enhance the security of the IT infrastructure and ensure compliance with industry standards and best practices.
Troubleshooting: Identify and resolve infrastructure and application issues, ensuring minimal downtime and optimal performance.
Documentation: Maintain comprehensive documentation of infrastructure configurations, processes, and procedures.
Requirements
Candidates must have the following mandatory skills for their profiles to be assessed for eligibility:
Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field.
Experience:
6-10 years of experience in a DevOps or systems engineering role.
Hands-on experience with cloud platforms (AWS, Azure, GCP).
Technical Skills:
Proficiency in scripting languages (e.g., Python, PowerShell). (2-3 years)
Experience with CI/CD tools (e.g., Jenkins, GitLab CI). (3-5 yrs)
Knowledge of containerization technologies (e.g., Docker, Kubernetes) - good to have.
Strong understanding of networking, security, and system administration. (3-5 yrs)
Familiarity with monitoring tools such as Dynatrace, Datadog, or Splunk.
Familiarity with Agile development methodologies.
Soft Skills:
Excellent problem-solving and analytical skills.
Strong communication and teamwork abilities.
Ability to work independently
About ValueMomentum
ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities. We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain. This includes Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the US insurance industry.
Data Scientist, United States - BCG X
Data engineer job in Pittsburgh, PA
Locations: Boston | Chicago | Pittsburgh | New York | Brooklyn | Miami | Dallas | San Francisco | Seattle | Los Angeles | Manhattan Beach
Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact.
To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.
We Are BCG X
We're a diverse team of more than 3,000 tech experts united by a drive to make a difference. Working across industries and disciplines, we combine our experience and expertise to tackle the biggest challenges faced by society today. We go beyond what was once thought possible, creating new and innovative solutions to the world's most complex problems. Leveraging BCG's global network and partnerships with leading organizations, BCG X provides a stable ecosystem for talent to build game-changing businesses, products, and services from the ground up, all while growing their career. Together, we strive to create solutions that will positively impact the lives of millions.
What You'll Do
Our BCG X teams own the full analytics value chain end to end: framing new business challenges, designing innovative algorithms, implementing and deploying scalable solutions, and enabling colleagues and clients to fully embrace AI. Our product offerings span from fully custom builds to industry-specific, leading-edge AI software solutions.
Our Data Scientists are part of our rapidly growing team to apply data science methods and analytics to real-world business situations across industries to drive significant business impact. You'll have the chance to partner with clients in a variety of BCG regions and industries, and on key topics like climate change, enabling them to design, build, and deploy new and innovative solutions.
Additional responsibilities will include developing and delivering thought leadership in scientific communities and papers as well as leading conferences on behalf of BCG X. Successful candidates are intellectually curious builders who are biased toward action, creative, and communicative.
What You'll Bring
We are looking for dedicated individuals with a passion for data science, statistics, operations research and redefining organizations into AI led innovative companies. Successful candidates possess the following:
* Comfort in a client-facing role with the ambition to lead teams
* A knack for distilling complex results or processes into simple, clear visualizations
* The ability to explain sophisticated data science concepts in an understandable manner
* A love of building things and comfort working with modern development tools and writing code collaboratively (bonus points if you have software development or DevOps experience)
* Significant experience applying advanced analytics to a variety of business situations and a proven ability to synthesize complex data
* A deep understanding of modern machine learning techniques and their mathematical underpinnings, and the ability to translate this into business implications for our clients
* Strong project management skills
Please note: any degree programs (including part-time) must be completed before starting at BCG.
TECHNOLOGIES:
Programming Languages: Python
Additional info
You must live within a reasonable commuting distance of your home office. As a member of that office, it is expected you will be in the office as directed. This role puts you on an accelerated path of personal and professional growth and development and so, at times, requires extended working hours. Our work often requires travel to client sites.
FOR U.S. APPLICANTS: BCG is an Equal Employment Opportunity employer and is committed to a policy of administering all employment decisions and actions without regard to race, national origin, religion, age, color, sex, sexual orientation, gender identity, disability, or protected veteran status, or any other characteristic protected by local, state, or federal laws, rules, or regulations.
The first-year base compensation for this role is:
Data Scientist I: $110,000 USD
Data Scientist II: $145,000 USD
Data Scientist III: $160,000 USD
At BCG, we are committed to offering a comprehensive benefit program that includes everything our employees and their families need to be well and live life to the fullest. We pay the full cost of medical, dental, and vision coverage for employees - and their eligible family members. * That's zero dollars in premiums taken from employee paychecks. All our plans provide best-in-class coverage:
* Zero-dollar ($0) health insurance premiums for BCG employees, spouses, and children
* Low $10 (USD) copays for trips to the doctor, urgent care visits and prescriptions for generic drugs
* Dental coverage, including up to $5,000 in orthodontia benefits
* Vision insurance with coverage for both glasses and contact lenses annually
* Reimbursement for gym memberships and other fitness activities
* Fully vested Profit-Sharing Retirement Fund contributions made annually, whether you contribute or not, plus the option for employees to make personal contributions to a 401(k) plan
* Paid Parental Leave and other family benefits such as elective egg freezing, surrogacy, and adoption reimbursement
* Generous paid time off including 12 holidays per year, an annual office closure between Christmas and New Year's, and 15 vacation days per year (earned at 1.25 days per month)
* Paid sick time on an as needed basis
* Employees, spouses, and children are covered at no cost. Employees share in the cost of domestic partner coverage.
Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws.
BCG is an E-Verify Employer.
Data Scientist - Medical document analysis
Data engineer job in Pittsburgh, PA
Thank you for your interest in joining Solventum. Solventum is a new healthcare company with a long legacy of solving big challenges that improve lives and help healthcare professionals perform at their best. At Solventum, people are at the heart of every innovation we pursue. Guided by empathy, insight, and clinical intelligence, we collaborate with the best minds in healthcare to address our customers' toughest challenges. While we continue updating the Solventum Careers Page and applicant materials, some documents may still reflect legacy branding. Please note that all listed roles are Solventum positions, and our Privacy Policy: *************************************************************************************** applies to any personal information you submit. As it was with 3M, at Solventum all qualified applicants will receive consideration for employment without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Job Description:
Data Scientist for Medical Document Analysis (Solventum)
3M Health Care is now Solventum
At Solventum, we enable better, smarter, safer healthcare to improve lives. As a new company with a long legacy of creating breakthrough solutions for our customers' toughest challenges, we pioneer game-changing innovations at the intersection of health, material, and data science that change patients' lives for the better while enabling healthcare professionals to perform at their best. Because people, and their wellbeing, are at the heart of every scientific advancement we pursue.
We partner closely with the brightest minds in healthcare to ensure that every solution we create melds the latest technology with compassion and empathy. Because at Solventum, we never stop solving for you.
The Impact You'll Make in this Role:
As a Data Scientist specializing in medical document analysis, you will work at the forefront of healthcare NLP. You will design, implement, and evaluate advanced natural language understanding (NLU) systems that interpret complex clinical text and support data-driven medical decision making. This includes developing models grounded in transformers, generative AI, and research-oriented neural architectures.
You will collaborate closely with clinical domain experts, fellow data scientists, ML engineers, and product teams to bring research prototypes into production systems. You will explore new modeling approaches, optimize training pipelines, and contribute to the long-term direction of Solventum's deep learning portfolio.
You Will Make an Impact By:
* Scoping and designing NLP/NLU applications that support clinical understanding of medical documents and downstream healthcare workflows.
* Translating business and clinical requirements into mathematical and experimental criteria, ensuring models are measurable, reliable, and auditable.
* Processing, aligning, and enriching diverse structured and unstructured medical datasets for deep learning, LLMs, and agentic AI systems.
* Developing and evaluating deep learning models, including transformer-based architectures and neural architectures such as CNNs and attention mechanisms for capturing linguistic patterns in clinical text.
* Understanding and mitigating model weaknesses, including bias, drift, hallucination behavior, and robustness concerns.
* Designing experiments and error analyses that explain model behavior and enable clear communication with both technical and non-technical audiences.
* Deploying models into Solventum's cloud environment, partnering with ML engineering to ensure reliability, observability, and compliance within regulated healthcare systems.
* Staying current with emerging research in NLP/LLMs, evaluating new methods and identifying opportunities to advance Solventum's deep learning initiatives.
Your Skills and Expertise
To set you up for success in this role from day one, Solventum requires (at a minimum) the following qualifications:
* Master's degree or PhD in computer science, mathematics, or a related field, or a Bachelor's degree with at least 5 years of IT experience
* Solid experience in Python, especially in deep learning for text analysis, and with libraries such as PyTorch and Transformers
* Solid grasp of statistics and exploratory data analysis
* US citizenship or permanent residency required
Additional qualifications that will help you succeed in the role:
* Experience conducting research-driven NLP/NLU work involving representation learning, attention mechanisms, or hybrid neural architectures.
* Ability to self-organize across multiple technical and business contexts, communicating complex findings with clarity and confidence.
* Experience extracting insights from complex clinical datasets and presenting those insights to varied audiences.
* Familiarity with AWS, GitHub, CI/CD, and scalable ML deployment practices.
* Hands-on experience with LLMs, prompting, fine-tuning, or agentic AI frameworks.
* Experience with ETL of large-scale text using tools such as PySpark, Spark NLP, or distributed data frameworks.
* Exposure to clinical coding systems or medical terminologies (e.g., ICD, CPT, SNOMED) is a plus.
Work location:
* Remote
Travel: Relocation Assistance: Not authorized
Must be legally authorized to work in country of employment without sponsorship for employment visa status (e.g., H1B status).
Supporting Your Well-being
Solventum offers many programs to help you live your best life - both physically and financially. To ensure competitive pay and benefits, Solventum regularly benchmarks with other companies that are comparable in size and scope.
Onboarding Requirement: To improve the onboarding experience, you will have an opportunity to meet with your manager and other new employees as part of the Solventum new employee orientation. As a result, new employees hired for this position will be required to travel to a designated company location for on-site onboarding during their initial days of employment. Travel arrangements and related expenses will be coordinated and paid for by the company in accordance with its travel policy. Applies to new hires with a start date of October 1, 2025 or later.
Applicable to US Applicants Only: The expected compensation range for this position is $119,076 - $145,537, which includes base pay plus variable incentive pay, if eligible. This range represents a good faith estimate for this position. The specific compensation offered to a candidate may vary based on factors including, but not limited to, the candidate's relevant knowledge, training, skills, work location, and/or experience. In addition, this position may be eligible for a range of benefits (e.g., Medical, Dental & Vision, Health Savings Accounts, Health Care & Dependent Care Flexible Spending Accounts, Disability Benefits, Life Insurance, Voluntary Benefits, Paid Absences and Retirement Benefits, etc.). Additional information is available at: ***********************************************************************
Responsibilities of this position include ensuring that corporate policies, procedures, and security standards are complied with while performing assigned duties.
Solventum is committed to maintaining the highest standards of integrity and professionalism in our recruitment process. Applicants must remain alert to fraudulent job postings and recruitment schemes that falsely claim to represent Solventum and seek to exploit job seekers.
Please note that all email communications from Solventum regarding job opportunities with the company will be from an email with a domain *****************. Be wary of unsolicited emails or messages regarding Solventum job opportunities from emails with other email domains.
Solventum is an equal opportunity employer. Solventum will not discriminate against any applicant for employment on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, or veteran status.
Please note: your application may not be considered if you do not provide your education and work history, either by: 1) uploading a resume, or 2) entering the information into the application fields directly.
Solventum Global Terms of Use and Privacy Statement
Carefully read these Terms of Use before using this website. Your access to and use of this website and application for a job at Solventum are conditioned on your acceptance and compliance with these terms.
Please access the linked document by clicking here, select the country where you are applying for employment, and review. Before submitting your application you will be asked to confirm your agreement with the terms.
Data Scientist
Data engineer job in Pittsburgh, PA
Job Description
7,000 Diseases - 500 Treatments - 1 Rare Pharmacy
PANTHERx is the nation's largest rare disease pharmacy, and we put the patient experience at the top of everything that we do.
If you are looking for a career in the healthcare field that embraces authentic dedication to patient care, you don't need to look beyond PANTHERx. In every line of service, in every position and area of expertise, PANTHERx associates are driven to provide the highest quality outcomes for our patients.
We are seeking team members who:
Are inspired and compassionate problem solvers;
Produce high-quality work;
Thrive in the excitement of the ever-challenging environment of modern medicine; and
Are committed to achieving superior health outcomes for people living with rare and devastating diseases.
At PANTHERx, we know our employees are the driving force in what we do. We cultivate talent and encourage growth within PANTHERx so that our associates can continue to explore their interests and expand their careers. Guided by our mission to provide uncompromising quality every day, we continue our strategic growth to further reach those affected by rare diseases.
Join the PANTHERx team, and define your own RxARE future in healthcare!
Location: Pittsburgh, PA (Hybrid or Remote)
Classification: Exempt
Status: Full-Time
Reports To: Data Scientist
Purpose
The Data Scientist is responsible for data modeling, model deployment, and model training, focusing on leveraging data to improve patient outcomes, optimize pharmacy operations, and support strategic decision-making. The ideal candidate will have experience in healthcare analytics, a strong foundation in statistical modeling and machine learning, and a passion for using data to drive innovation in specialty pharmacy services.
Responsibilities
Develops predictive models to support business needs, such as identifying patient adherence risks and optimizing medication therapy management.
Applies machine learning techniques to uncover patterns in patient behavior, treatment efficacy, and operational efficiency.
Translates complex data into actionable insights for pharmacy leadership and clinical teams.
Participates in the end-to-end lifecycle of AI projects, from ideation to model deployment and monitoring.
Supports strategic initiatives such as value-based care programs, payer reporting, and clinical trial analytics.
Works with data engineers to ensure data integrity, accessibility, and security across pharmacy systems.
Cleans, transforms, and validates large datasets from EMRs, claims, dispensing systems, and patient engagement platforms.
Works with Data Architect to evaluate and implement new tools, technologies, and methodologies to advance the organization's AI capabilities.
Communicates complex technical concepts and project outcomes to non-technical stakeholders, including executive leadership.
Champions data governance, quality, and security best practices across the organization.
Required Qualifications
Bachelor's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field.
Minimum of one (1) year of professional experience in data science.
Proficiency in Python, R, SQL, and data visualization tools (e.g., Tableau, Power BI).
Strong understanding of data privacy, security, and compliance requirements (e.g., HIPAA, GDPR).
Excellent communication and stakeholder management skills.
Preferred Qualifications
Master's in Computer Science, Data Science, Statistics, Mathematics, or a related field.
Experience with healthcare data standards (HL7, FHIR, NCPDP) and familiarity with HIPAA compliance.
Experience with NLP for clinical text analysis.
Knowledge of payer-provider dynamics and specialty pharmacy reimbursement models.
Familiarity with the Azure cloud platform and Azure Databricks.
Work Environment
This position operates in a home or professional office environment. This role routinely uses standard office equipment such as computers, phones, photocopiers, filing cabinets and fax machines.
Physical Demands
While performing the duties of this job, the employee is regularly required to sit, see, talk, or hear. The employee frequently is required to stand; walk; use hands and fingers, handle or feel; and reach with hands and arms. Visual acuity is necessary for tasks such as reading or working with various forms of data for extended periods on a computer screen. Reasonable accommodation may be made to enable individuals with disabilities to perform the essential functions of the job.
Benefits:
Hybrid, remote and flexible on-site work schedules are available, based on the position. PANTHERx Rare Pharmacy also affords an excellent benefit package, including but not limited to medical, dental, vision, health savings and flexible spending accounts, 401K with employer matching, employer-paid life insurance and short/long term disability coverage, and an Employee Assistance Program! Generous paid time off is also available to all full-time employees, as well as limited paid time off for part-time employees. Of course we offer paid holidays too!
Equal Opportunity:
PANTHERx Rare Pharmacy is an equal opportunity employer, and does not discriminate in recruiting, hiring, promotions or any term or condition of employment based on race, age, religion, gender, ethnicity, sexual orientation, gender identity, disability, protected veteran status, or any other characteristic protected by federal, state or local laws.
Product Data Scientist
Data engineer job in Pittsburgh, PA
Job Description
About the Company
At Bloomfield, we are revolutionizing the way crops are monitored and managed. Our AI-powered imaging technology provides continuous, plant-level health and performance insights from seed to harvest. Our mission is to empower farmers with the tools they need to increase crop productivity and quality while using fewer scarce resources, ultimately contributing to a more sustainable and food-secure future.
In 2024, Kubota Corporation, a global leader in agricultural machinery and solutions, through its North American subsidiary, Kubota North America Corporation, acquired Bloomfield. This acquisition unites Bloomfield's innovative technology with Kubota's extensive resources and commitment to provide comprehensive smart agriculture solutions to farmers worldwide. Our combined expertise and resources will drive innovation and deliver benefits to farmers, ensuring a more sustainable and prosperous agricultural industry.
About the Role
Our AI team (7 engineers and scientists, growing) builds the models, data pipelines, and measurement frameworks that power our data products.
We are now reorganizing the team into AI Platform and AI Product Streams. Each product stream will include two AI engineers focused on delivering customer-facing features end-to-end.
This role sits on a Product Stream and focuses on turning real-world plant imagery into reliable data products for growers.
We are looking for an AI Engineer - Product Data Scientist who is passionate about plant-based data, model development, and delivering real value to end-users.
You will take an ask from the Product team (e.g., “We are counting citrus; we want to estimate their weight.”) and drive the early phases of the ML lifecycle:
data selection
annotation strategy
data and annotation quality and exploration
metric definition
model selection and training
This role is highly collaborative, hands-on with data, and focused on building AI capabilities that directly impact customers' operations.
Responsibilities:
Work closely with AI, Product, and Engineering teams to build AI products focused on the value delivered to customers
Own the early ML lifecycle: data exploration, curation, annotation strategy, metric design, model prototyping, and experimentation
Build quantitative and qualitative frameworks to describe plant behavior and crop characteristics
Translate raw plant imagery and sensor data into business insights for specialty crops (e.g., citrus, grapes, berries)
Partner with the AI Platform and Data Engineering teams to integrate algorithms into the production data pipeline
Contribute to shaping our new AI Product Stream structure and workflows
Qualifications:
Demonstrated ability to explore and analyze image data to design and implement the early stages of an ML model lifecycle (data selection, exploration, cleaning, model design and training, performance evaluation)
Comfort working with real-world, messy data
Solid Python experience
Solid knowledge of Machine Learning and Computer Vision
Ability to design, conduct, and interpret experiments using appropriate statistical and scientific methods
A product mindset - willing to iterate with users, look at thousands of images, define metrics, and refine models
Bonus Points:
Knowledge of plant physiology, plant development, agronomy, or a related discipline
Experience with high-throughput phenotyping, crop estimation, or yield prediction
Experience working with imagery or sensor data in agricultural environments
Experience in Deep Learning
Experience in C++
What We Offer
In addition to the opportunity to apply and develop your skills toward key business objectives, we offer an excellent compensation package including:
Competitive base salary & bonus
Opportunity to shape the future of AI in agriculture within a small, focused team backed by Kubota
Bloomfield is an equal opportunity employer. We consider qualified applicants without regard to race, color, religion, sex, national origin, sexual orientation, disability, gender identity, protected veteran status, or other protected classes.
Data Scientist
Data engineer job in Pittsburgh, PA
* Bachelor's Degree in a STEM (science, technology, engineering, math) related field or a similar quantitative analytics field.
* Up to 4 years of professional experience including exposure to data science concepts and computational tools (e.g., Python, Spark (PySpark), SQL, Hadoop). Professional, internship or project-based experience preferred.
* Familiarity with varying database structures and experience working with large datasets.
* Exposure to data visualization technology and capabilities (e.g., Power BI, Tableau).
* Exposure to a variety of data products that will complement data preparation, algorithm models and visualization development (e.g., the pandas library, Jupyter Notebooks, Power BI).
* Intermediate technical and analytical abilities with programming skills in a language such as Python.
* Basic understanding of the techniques to collect, organize, blend, cleanse and synthesize large volumes of disparate data.
* Ability to effectively communicate "data stories" to other non-technical business partners.
The experience level of the candidate will determine the level of the position as Data Scientist or Sr. Data Scientist.
MAJOR DUTIES:
* Develop knowledge of and then utilize a range of data science tools and techniques including the Azure Cloud Data & Advanced Analytics tech stack, Spark via Databricks, Azure Machine Learning, and Power BI/Tableau.
* Participate in the development of documentation; coordinate closely with team members and leadership to communicate project statuses and initiatives.
* Design new data science solutions and implement research ideas and trading algorithms to drive insights consumed by analysts and portfolio managers.
* Work in an agile, collaborative environment, partnering with other data scientists, data engineers, and data analysts of all backgrounds and disciplines to bring data and analytics to life.
* Assist development of data pipeline programs and perform analysis on alternative and traditional datasets to develop investment factors and insights using machine learning and quantitative methods.
* Identify and develop custom data models and the appropriate algorithmic methods (regression models, classification, tree-based methods, text mining, natural language processing, unsupervised learning such as clustering, etc.) to support all stages of the data science lifecycle.
* Build compelling, clear, and powerful visualizations that are useful and appealing to users.
* Prepare written and verbal communications along with preparing and delivering data science artifacts (abstract, data sources / data dictionaries, code library, research / findings, modeling / deployment report, new ideas/next steps).
* Stay current with rapidly evolving analytical techniques such as machine learning, deep learning, and text analytics.
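The duties above name several standard modeling families (regression, classification, clustering). As a minimal, hedged illustration of the first of these, here is a least-squares regression fit on synthetic data; all numbers and variable names are hypothetical, not drawn from the listing's actual datasets:

```python
# Minimal sketch of a regression model (one of the methods named above),
# fit by ordinary least squares on synthetic toy data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # two synthetic features
true_beta = np.array([1.5, -2.0])        # coefficients we hope to recover
y = X @ true_beta + rng.normal(scale=0.1, size=100)  # noisy target

# Closed-form least-squares fit: minimizes ||X @ beta - y||^2
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With low noise and 100 samples, `beta_hat` should land close to the true coefficients; in practice the same fit would typically be run through a library such as scikit-learn or statsmodels rather than raw linear algebra.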
HOURS/LOCATION:
* 8:30 a.m. - 5:00 p.m. (Overtime as required)
* Pittsburgh, PA 15222
* Hybrid Location (office/remote)
EXPLANATORY COMMENTS:
* Excellent analytical skills with the ability to understand business issues/strategy
* Demonstrate the ability to work independently as a self-starter, to drive and contribute to success
* Excellent problem solving, decision-making and project management skills
Data Scientist
Data engineer job in Pittsburgh, PA
Govini transforms Defense Acquisition from an outdated manual process to a software-driven strategic advantage for the United States. Our flagship product, Ark, supports Supply Chain, Science and Technology, Production, Sustainment, Logistics, and Modernization teams with AI-enabled Applications and best-in-class data to more rapidly imagine, develop, and field the capabilities we need. Today, the national security community and every branch of the military rely on Govini to enable faster and more informed Acquisition decisions.
Job Description
We are seeking an inquisitive data scientist to join our team and work with our various datasets to find connections, knowledge, and potential issues in order to help our government clients make more informed decisions. Your role on the Data Science team will be primarily focused on our large datasets, and you will design and implement scalable statistical systems based on your analysis. You will have the opportunity to expand and grow data-driven research across Govini, and lead new areas to apply advanced analytics to drive business results. As part of our team, you must be a data nerd with a strong understanding of the various fundamentals of data science and analysis and know how to bring data to life.
In order to do this job well, you must be a highly organized problem-solver and possess excellent oral and written communication skills. You are independent, driven, and motivated to jump in and roll up your sleeves to get the job done. You lead by influence and motivation. You have a passion for great work and nothing less than your best will do. You share our intolerance of mediocrity. You're uber-smart, challenged by figuring things out and producing simple solutions to complex problems. Knowing there are always multiple answers to a problem, you know how to engage in a constructive dialogue to find the best path forward. You're scrappy. We like scrappy. We need a creative, out-of-the-box thinker who shares our passion and obsession with quality.
This role is a full-time position located out of our office in Pittsburgh, PA. This role may require up to 25% travel.
Scope of Responsibilities
Design experiments, test hypotheses, and build models for advanced data analysis and complex algorithms
Apply advanced statistical and predictive modeling techniques to build, maintain, and improve multiple real-time decision systems
Make strategic recommendations on data collection, integration, and retention requirements, incorporating business requirements and knowledge of data industry best practices
Model and frame business scenarios that are meaningful and which impact critical processes and decisions; transform, standardize, and integrate datasets for client use cases
Convert custom, complex and manual client data analysis tasks into repeatable, configurable processes for consistent and scalable use within the Govini SaaS platform
Optimize processes for maximum speed, performance, and accuracy; craft clean, testable, and maintainable code
Partner with internal Govini business analysts and external client teams to seek out the best solutions regarding data-driven problem solving
Participate in end-to-end software development, on an agile team in a scrum process, collaborating closely with fellow software, machine learning, data, and QA engineers
Qualifications
US Citizenship is Required
Required Skills:
Bachelor's degree in Computer Science, Computer Engineering, Mathematics, Statistics, or a related field; Master's or PhD preferred
Minimum 3 years of hands-on data science experience
Minimum 3 years deriving key insights and KPIs for external and internal customers
Regular development experience in Python
Prior hands-on experience working with data-driven analytics
Proven ability to develop solutions to loosely defined business problems by leveraging pattern detection over large datasets
Proficiency in statistical analysis, quantitative analytics, forecasting/predictive analytics, multivariate testing, and optimization algorithms
Experience using machine learning algorithms (e.g., gradient-boosted machines, neural networks)
Ability to work independently with little supervision
Strong communication and interpersonal skills
A burning desire to work in a challenging fast-paced environment
Desired Skills:
Current possession of a U.S. security clearance, or the ability to obtain one with our sponsorship
Experience in or exposure to the nuances of a startup or other entrepreneurial environment
Experience working on agile/scrum teams
Experience building datasets from common database tools using flavors of SQL
Expertise with automation and streaming data
Experience with major NLP frameworks (spaCy, fastText, BERT)
Familiarity with big data frameworks (e.g., HDFS, Spark) and AWS
Familiarity with Git source control management
Experience working in a product organization
Experience analyzing financial, supply chain/logistics, or intellectual property data
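As a hedged sketch of the SQL dataset-building skill listed above, the following uses Python's built-in sqlite3 module as a stand-in for whichever database flavor a team actually runs; the table and column names are hypothetical:

```python
# Hypothetical sketch: building a small per-region dataset with SQL
# (sqlite3 stands in for any SQL flavor; schema and data are invented).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES (1, 'east', 10.0), (2, 'east', 30.0), (3, 'west', 5.0);
""")

# Aggregate raw rows into a per-region summary suitable for analysis
rows = conn.execute(
    "SELECT region, COUNT(*), SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
# rows -> [('east', 2, 40.0), ('west', 1, 5.0)]
```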
We firmly believe that past performance is the best indicator of future performance. If you thrive while building solutions to complex problems, are a self-starter, and are passionate about making an impact in global security, we're eager to hear from you.
Govini is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans status or any other characteristic protected by law.
Data Scientist III - Pittsburgh, PA
Data engineer job in Pittsburgh, PA
On behalf of VetJobs/MilitarySpouseJobs, thank you for your interest. We are assisting our partnering company, listed below, with this position. It is open to Veterans, Transitioning Military, National Guard Members, Military Spouses, Wounded Warriors, and their Caregivers. If you have the required skill set, education requirements, and experience, please click the submit button and follow for next steps.
Job Description: Contribute to the development of data analysis plans. Perform statistical programming using statistical software for data analysis. Communicate analytical findings to diverse audiences, including Principal Investigators, research partners, and colleagues. Work independently on projects with minimal input from the PI. Lead the development, dissemination, and implementation of reproducible statistical programs for a multi-state, multi-year study. Assist in manuscript preparation (graphs, tables, figures) for publications and grant proposals. Perform data curation and analysis on billions of claims and electronic health records, including Medicaid and NVSS birth certificate data, to create analytic study cohorts. Apply machine learning methods on clinical datasets to identify predictive factors: selecting algorithms, preprocessing data, training models, and evaluating performance to generate actionable insights.
Requirements: Bachelor's degree in Biostatistics, Data Analytics, Mathematics, or a related field plus 1 year of experience as a Data Scientist III or a Data Analyst or related occupation.
Offered salary: $81,883 per year.
The University of Pittsburgh is committed to championing all aspects of diversity, equity, inclusion, and accessibility within our community. This commitment is a fundamental value of the University and is crucial in helping us advance our mission, which includes attracting and retaining diverse workforces. We will continue to create and maintain an environment that allows individuals to discover, belong, contribute, and grow, while honoring the experiences, perspectives, and unique identities of all.
The University of Pittsburgh is an Affirmative Action/Equal Opportunity Employer and values equality of opportunity, human dignity and diversity. EOE, including disability/vets.
Assignment Category Full-time temporary
Campus Pittsburgh
Minimum Education Level Required Bachelor's Degree
Minimum Years of Experience Required 1
Hiring Range TBD Based on Qualifications
Background Check For position finalists, employment with the University will require successful completion of a background check
Child Protection Clearances Not Applicable
Required Documents Resume, Cover Letter
Staff Safety Data Scientist, Safety Analysis
Data engineer job in Pittsburgh, PA
We are searching for a Staff Safety Data Scientist on the Safety Analysis team who is a technical leader and go-to expert for risk and safety guidance, leveraging deep expertise in data science to lead critical safety research. Your insights will drive the safety strategy and external safety communications, including the development of industry-leading, benchmark safety studies and frameworks. The role requires a strong background in risk and hazard assessment and exceptional communication and interpersonal skills. You will be responsible for applying advanced statistical analysis and probabilistic modeling to support the safety case, inform hardware and software decisions, and identify critical risk factors. You will collaborate with diverse stakeholders across engineering, operations, and product.
In This Role, You Will:
Lead the development of novel quantitative data analytics using both proprietary (Aurora-logged, sensor, system, integration testing data) and publicly available data (CRSS, FARS, state-level information).
Author and present technical analyses and findings to diverse internal and external audiences, including stakeholders, authoritative bodies, and industry forums.
Design and automate data collection and analysis to support ongoing safety programs.
Extract insights from historical system safety performance to develop leading indicators for future performance forecasting. Analyze safety data from operational vehicles, crash metrics, and near-miss incidents to inform safety strategies and decision-making.
Develop statistical models and algorithms to predict potential risks and prevent incidents, improving the safety performance of autonomous systems.
Model self-driving vehicle behaviors at the system and subsystem levels.
Develop and maintain reports and expressions of baseline risk coverage and application in operations.
Design and develop expressions of risk benchmarking from similar industries.
Create dashboards and reports for communicating risk, ranking, and anomalies.
Present safety research findings to senior leadership and recommend actionable improvements to safety protocols and operational systems.
Mentor and lead junior team members, fostering their professional growth.
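As a hedged illustration of the statistical and probabilistic modeling this role describes, incident rates are often treated as a Poisson process normalized by exposure; the sketch below uses invented toy numbers, not real safety data:

```python
# Hypothetical sketch: exposure-normalized incident rate under a Poisson
# assumption, with a normal-approximation 95% interval (toy numbers only).
import math

incidents = 12               # observed incidents (hypothetical)
exposure_miles = 4_000_000   # miles of exposure (hypothetical)
millions = exposure_miles / 1_000_000

rate_per_million = incidents / millions

# Normal approximation to the Poisson count: count +/- 1.96 * sqrt(count)
half_width = 1.96 * math.sqrt(incidents)
low = (incidents - half_width) / millions
high = (incidents + half_width) / millions
```

A production safety analysis would use an exact Poisson interval (or a Bayesian model) rather than this normal approximation, but the exposure-normalization step is the same.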
Required Qualifications
Bachelor's degree in Data Science, Statistics, Mathematics, Physics, Engineering, Computer Science, or equivalent applicable technical experience.
7+ years of progressive experience solving large-scale complex problems.
Demonstrated experience in a safety related domain (e.g., transportation, aerospace, robotics, medical devices).
Strong understanding of safety principles and risk assessment methodologies, with a proven track record of using data science techniques to solve safety challenges and mitigate risks in a dynamic environment.
Deep command of statistical methods, probabilistic modeling, and performing rapid exploratory data analysis.
Expertise in data science tools (e.g., Python, SQL, R), statistical modeling, machine learning, predictive analytics, and visualization software (e.g., Tableau, Power BI).
Adept at querying, analyzing, and visualizing large datasets.
Excellent communication and presentation skills, with the ability to convey complex information to various audiences.
Strong leadership skills, with the ability to provide guidance and mentorship.
Ability to work independently, as part of a cross-functional team, or as a project lead.
Desirable Qualifications
Advanced degree in Statistics, Mathematics, Physics, Engineering, Computer Science, or equivalent applicable technical experience.
Experience working in safety and risk, ideally complemented by a portfolio of publications, reports, or conference presentations.
Familiarity with rapidly scaling operational environments.
Proficient in working with advanced data transformation tools such as dbt.
Skilled in building and deploying using Amazon Web Services (AWS) tools.
Background in the Autonomous Vehicles, Aerospace, or Robotics domains.
The base salary range for this position is $171,000 - $247,000 per year. Aurora's pay ranges are determined by role, level, and location. Within the range, the successful candidate's starting base pay will be determined based on factors including job-related skills, experience, qualifications, relevant education or training, and market conditions. These ranges may be modified in the future. The successful candidate will also be eligible for an annual bonus, equity compensation, and benefits.
Data Scientist
Data engineer job in Toronto, OH
We are looking for a data scientist who will help us discover the information hidden in vast amounts of data, and help us make smarter decisions to deliver even better products. Your primary focus will be applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products.
Responsibilities
Selecting features, building and optimizing classifiers using machine learning techniques
Data mining using state-of-the-art methods
Extending company's data with third party sources of information when needed
Enhancing data collection procedures to include information that is relevant for building analytic systems
Processing, cleansing, and verifying the integrity of data used for analysis
Doing ad-hoc analysis and presenting results in a clear manner
Creating automated anomaly detection systems and constantly tracking their performance
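The responsibilities above can be illustrated with a minimal anomaly detector; this is a hedged sketch on synthetic data (the injected spike and the 3-sigma threshold are hypothetical choices, not the company's method):

```python
# Minimal sketch of automated anomaly detection (synthetic data):
# flag points more than 3 standard deviations from the signal's mean.
import numpy as np

rng = np.random.default_rng(1)
signal = rng.normal(loc=0.0, scale=1.0, size=500)
signal[100] = 8.0   # inject one obvious anomaly at index 100

mean, std = signal.mean(), signal.std()
z_scores = np.abs(signal - mean) / std
anomalies = np.flatnonzero(z_scores > 3.0)  # indices of flagged points
```

A production system would add a rolling baseline and continuous performance tracking (precision/recall against labeled incidents), as the responsibility above implies.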