Senior Data Engineer
Data engineer job in Bethlehem, PA
Hybrid (Bethlehem, PA)
Contract
Applicants must be authorized to work in the U.S. without sponsorship
We're looking for a Senior Data Engineer to join our growing technology team and help shape the future of our enterprise data landscape. This is a hands-on, high-impact opportunity to make recommendations and to build and evolve a modern data platform using Snowflake and cloud-based EDW solutions.
How You'll Impact Results:
Drive the evolution and architecture of scalable, secure, cloud-native data platforms
Design, build, and maintain data models, pipelines, and integration patterns across the data lake, data warehouse, and consumption layers
Lead deployment of long-term data products and infuse data and analytics capabilities across business and IT
Optimize data pipelines and warehouse performance for accuracy, accessibility, and speed
Collaborate cross-functionally to deliver data, experimentation, and analytics solutions
Implement systems to monitor data quality and ensure reliability and availability of Production data for downstream users, leadership teams, and business processes
Recommend and implement best practices for query performance, storage, and resource efficiency
Test and clearly document data assets, pipelines, and architecture to support usability and scale
Engage across project phases and serve as a key contributor in strategic data architecture initiatives
Your Qualifications That Will Ensure Success:
Required:
10+ years of experience in Information Technology data engineering, including professional database and data warehouse development
Advanced proficiency in SQL, data modeling, and performance tuning
Experience in system configuration, security administration, and performance optimization
Deep experience with Snowflake and modern cloud data platforms (AWS, Azure, or GCP)
Familiarity with developing cloud data applications (AWS, Azure, Google Cloud) and/or standard CI/CD tools like Azure DevOps or GitHub
Strong analytical, problem-solving, and documentation skills
Proficiency with Microsoft Excel and common data analysis tools
Ability to troubleshoot technical issues and provide system support to non-technical users.
Preferred:
Experience integrating SAP ECC data into cloud-native platforms
Exposure to AI/ML, API development, or Boomi Atmosphere
Prior experience in consumer packaged goods (CPG), Food / Beverage industry, or manufacturing
Data Scientist, Agentic AI (Insurance Underwriting)
Data engineer job in Bethlehem, PA
Guardian is on a transformation journey to evolve into a modern, forward-thinking insurance company committed to enhancing the wellbeing of its customers and their families. This role presents a distinctive opportunity to drive real-world impact by applying cutting-edge AI to transform how Guardian does business.
Guardian's Data & AI team spearheads a culture of intelligence and automation across the enterprise, creating business value from advanced data and AI solutions. Our team includes data scientists, engineers, analysts, and product leaders working together to deliver AI-driven products that power growth, improve risk management, and elevate customer experience.
Guardian created the Data Science Lab (DSL) to reimagine insurance in light of emerging technology, evolving consumer needs, and rapid advances in AI. The DSL expedites Guardian's transition to data-driven decision making and fosters innovation by rapidly testing, scaling, and operationalizing state-of-the-art AI.
We are seeking a Data Scientist, Agentic AI: an experienced individual contributor with strong experience in Agentic AI, large language models (LLMs), and natural language processing (NLP), and a track record of turning advanced research into practical, impactful enterprise solutions. This role focuses on building, deploying, and scaling agentic AI systems, large language models, and intelligent automation solutions that reshape how Guardian operates, serves customers, and drives growth. You'll collaborate directly with senior executives on high-visibility projects to bring next-generation AI to life across Guardian's products and services.
**You will:**
+ Design and implement Agentic AI solutions that automate business workflows, improve decision-making, and enhance customer and employee experiences.
+ Apply LLMs and generative AI to process and interpret unstructured data such as contracts, underwriting notes, claims, medical records, and customer interactions.
+ Develop autonomous agents and reasoning systems that integrate with Guardian's platforms to deliver measurable business outcomes.
+ Collaborate with data engineers and AIOps teams to ensure models are scalable, robust, and production-ready.
+ Translate research in agentic AI and reinforcement learning into practical applications for underwriting, claims automation, customer servicing, and risk assessment.
+ Work closely with product owners, engineers, and business stakeholders to define use cases, design solutions, and measure impact.
+ Contribute to the Data Science Lab by building reusable components and frameworks for developing and deploying agentic AI solutions.
+ Adhere to AI and LLM governance, documentation, testing, and other best practices in partnership with key stakeholders.
**You are:**
+ Passionate about applying advanced AI techniques to solve real-world business challenges.
+ Curious about agentic AI, autonomous systems, and LLM-based solutions that transform industries.
+ A hands-on builder who enjoys moving solutions from prototype to deployment.
+ Comfortable collaborating in cross-functional teams and aligning technical solutions with business goals.
**You have:**
+ PhD with 0-1 years of experience, Master's degree with 2+ years, or Bachelor's degree with 4+ years in Statistics, Computer Science, Engineering, Applied Mathematics, or related field.
+ Experience in the insurance industry (underwriting experience preferred).
+ 2+ years of hands-on experience in AI/ML modeling and development.
+ Solid understanding of probability, statistics, and machine learning fundamentals.
+ Strong programming skills in Python and familiarity with frameworks like PyTorch, TensorFlow, and LangGraph.
+ Experience with LLMs, generative AI, and multi-step reasoning systems.
+ Excellent problem-solving and analytical skills with attention to detail.
+ Strong communication skills and ability to collaborate effectively with product and engineering teams.
+ Working knowledge of core software engineering concepts (version control with Git/GitHub, testing, logging, etc.).
+ Working knowledge of a variety of machine learning techniques (clustering, decision trees, bagging/boosting, artificial neural networks, etc.) and their real-world advantages/drawbacks.
**Location:**
+ Three days a week at a Guardian office in New York, NY, Holmdel, NJ, Bethlehem, PA, or Boston, MA
**Work Authorization**
+ Guardian Life is not currently or in the foreseeable future sponsoring employment visas. In order to be a successful applicant, you must be legally authorized to work in the United States, without the need for employer sponsorship now or in the future.
**Salary Range:**
$95,170.00 - $156,355.00
The salary range reflected above is a good faith estimate of base pay for the primary location of the position. The salary for this position ultimately will be determined based on the education, experience, knowledge, and abilities of the successful candidate. In addition to salary, this role may also be eligible for annual, sales, or other incentive compensation.
**Our Promise**
At Guardian, you'll have the support and flexibility to achieve your professional and personal goals. Through skill-building, leadership development and philanthropic opportunities, we provide opportunities to build communities and grow your career, surrounded by diverse colleagues with high ethical standards.
**Inspire Well-Being**
As part of Guardian's Purpose - to inspire well-being - we are committed to offering contemporary, supportive, flexible, and inclusive benefits and resources to our colleagues. Explore our company benefits at *********************************************** . _Benefits apply to full-time eligible employees. Interns are not eligible for most Company benefits._
**Equal Employment Opportunity**
Guardian is an equal opportunity employer. All qualified applicants will be considered for employment without regard to age, race, color, creed, religion, sex, affectional or sexual orientation, national origin, ancestry, marital status, disability, military or veteran status, or any other classification protected by applicable law.
**Accommodations**
Guardian is committed to providing access, equal opportunity and reasonable accommodation for individuals with disabilities in employment, its services, programs, and activities. Guardian also provides reasonable accommodations to qualified job applicants (and employees) to accommodate the individual's known limitations related to pregnancy, childbirth, or related medical conditions, unless doing so would create an undue hardship. If reasonable accommodation is needed to participate in the job application or interview process, to perform essential job functions, and/or to receive other benefits and privileges of employment, please contact *************. Please note: this resource is for accommodation requests only. For all other inquires related to your application and careers at Guardian, refer to the Guardian Careers site.
**Visa Sponsorship**
Guardian is not currently or in the foreseeable future sponsoring employment visas. In order to be a successful applicant, you must be legally authorized to work in the United States, without the need for employer sponsorship.
**Current Guardian Colleagues: Please apply through the internal Jobs Hub in Workday.**
Every day, Guardian helps our 29 million customers realize their dreams through a range of insurance and financial products and services. Our Purpose, to inspire well-being, guides our dedication to the colleagues, consumers, and communities we serve. We know that people count, and we go above and beyond to prepare them for the life they want to live, focusing on their overall well-being - mind, body, and wallet. As one of the largest mutual insurance companies, we put our customers first. Behind every bright future is a Guardian™. Learn more about Guardian at guardianlife.com .
Qlik Data Engineer
Data engineer job in Easton, PA
Description Summary
This position is NOT eligible for visa sponsorship. This role will specialize in comprehensive data pipeline development and management, enabling our current Business Intelligence team to focus on analytics and business value while ensuring robust, scalable data integration solutions.
Background and Current State
Our Business Intelligence team currently operates as a multi-disciplinary unit managing the complete data lifecycle from ingestion to visualization. The current structure requires our BI professionals to wear many hats, handling responsibilities that span data engineering, ETL/ELT development, data modeling, report creation, dashboard development, and business relationship management. While this approach has served us well in establishing our data capabilities, the increasing complexity of our data ecosystem and growing business demands have created capacity constraints and specialization challenges. Our data integration landscape has evolved significantly with the adoption of Qlik Data Integration and Qlik Talend Cloud Enterprise Edition. The current team's broad responsibilities limit the depth of specialization possible in any single area, particularly in the technical aspects of modern real-time data integration and the advanced features available in Qlik Talend Cloud Enterprise Edition. As our organization increasingly requires real-time analytics, operational reporting, and seamless data movement across hybrid cloud environments, we need dedicated expertise to ensure our Qlik platform delivers optimal performance and business value.
Primary Job Responsibilities
Data Integration Architecture and Engineering
Develop and maintain ETL/ELT data pipelines leveraging Qlik Data Integration for data warehouse generation across bronze, silver, and gold layers
Build consumer-facing data marts, views, and push-down calculations to enable improved analytics by the BI team and citizen developers
Implement enterprise data integration patterns supporting batch, real-time, and hybrid processing requirements
Coordinate the execution of pipelines and monitor them to ensure timely reload of the EDW
Technical Implementation and Platform Management
Configure and manage Qlik Data Integration components including pipeline projects, lineage, data catalog, data quality, and data marketplace
Implement data quality rules and monitoring using Qlik and Talend tools
Manage the Qlik tenant, security, and access, and manage the Data Movement Gateway
Performance, Monitoring, Governance and Management
Monitor and optimize data replication performance, latency, and throughput across all integration points
Implement comprehensive logging, alerting, and performance monitoring
Conduct regular performance audits and capacity planning for integration infrastructure
Establish SLA monitoring and automated recovery procedures for critical data flows
Collaboration and Enterprise Support
Provide technical expertise on Qlik Data Integration best practices and enterprise patterns
Support database administrators and infrastructure teams with replication and integration architecture
Lead technical discussions on data strategy and platform roadmap decisions
Key QualificationsRequired Skills
Bachelor's degree in Computer Science, Information Systems, or related technical field
4+ years of experience in enterprise data integration with at least 2 years of hands-on Qlik or Talend experience
Strong understanding of change data capture (CDC) technologies and real-time data streaming concepts
Strong understanding of data lake and data warehouse strategies, and data modeling
Advanced SQL skills with expertise in database replication, synchronization, and performance tuning
Experience with enterprise ETL/ELT tools and data integration patterns
Proficiency in at least one programming language (Java, Python, or SQL scripting)
Preferred Qualifications
Qlik Data Integration certification or Talend certification (Data Integration, Data Quality, or Big Data)
Experience with cloud platforms (AWS or Azure) and hybrid integration scenarios
Experience with Snowflake preferred
Understanding of data governance frameworks and regulatory compliance requirements
Experience with API management and microservices architecture
Soft Skills
Strong analytical and troubleshooting capabilities with complex integration challenges
Excellent communication skills with ability to explain technical integration concepts to business stakeholders
Collaborative approach with experience in cross-functional enterprise teams
Detail-oriented mindset with commitment to data accuracy and system reliability
Adaptability to evolving integration requirements and emerging technologies
This position is NOT eligible for visa sponsorship.
EEO Statement: Victaulic is an Equal Employment Opportunity (EOE/M/F/Vets/Disabled) employer and welcomes all qualified applicants. Applicants will receive fair and impartial consideration without regard to race, gender, color, religion, national origin, age, disability, veteran status, sexual orientation, genetic data, or other legally protected status
Victaulic Staffing Partner Communication Policy
All staffing agencies are strictly forbidden from directly contacting any Victaulic employees, except those within the Human Resources/Talent Acquisition team. All communications, inquiries and candidate submissions must be routed through Victaulic's Human Resources/Talent Acquisition team. Non-compliance with this policy may result in the suspension of partnership, cancellation of the current contract, and/or the imposition of a mandatory probation period before any future business can resume. Additionally, non-compliance may lead to a permanent ban on future business. This policy ensures a streamlined and compliant recruitment process.
Associate Data Scientist
Data engineer job in Reading, PA
This role will help in the development of predictive analytics solutions. The Associate Data Scientist will work on 1-2 projects concurrently under the supervision of senior members of the Data Science team. Projects at this level vary from small and straightforward to large and complex; similarly, impact can range from the team/department level up to organization-wide. The Associate Data Scientist provides analysis and assists in developing machine-learning models to solve business problems. Such efforts require cross-functional involvement from various disciplines across the organization.
Major Responsibilities:
Data evaluation and analysis:
• Assist in identifying appropriate data sources to answer the business question
• Extract, blend, cleanse, and organize data
• Visualize data
• Identify (and ameliorate) outliers and missing or incomplete records in the data
Model building and implementation:
• Participate in the identification of appropriate techniques and algorithms for building the model
• Create and test the model
• Build the application and embed the model
Communication:
• Engage in best-practices discussions with other team members regarding modeling activities
• Discuss project activities and results with the team
• Help in the documentation and presentation of project stories
Identify opportunities for savings and process improvement:
• Collaborate with various stakeholders to identify business problems
• Help conduct ROI analysis to determine project feasibility
• Help build the business case to solve such business problems
• Other projects as assigned by the supervisor
Qualifications:
• Master's degree required; concentration in Engineering, Operations Research, Statistics, Applied Math, Computer Science, or related quantitative field preferred
• 1+ years of experience along with a Master's degree in data or business analytics
• 1 year of experience designing and building machine learning applications using either structured or unstructured datasets is required
• Practical experience programming in Python, R, or other high-level scripting languages is required
• Demonstrated experience with one or more machine learning techniques, including logistic regression, decision trees, random forests, and clustering, is required
• Knowledge of and experience with SQL is preferred
• Experience in the following areas:
  - Machine learning (intermediate)
  - Statistical modeling (elementary)
  - Supervised learning (intermediate)
  - Statistical computing packages (e.g., R) (elementary)
  - Scripting languages (e.g., Python) (intermediate)
• Ability to collect and analyze complex data
• Must be able to translate data insights into action
• Must be able to organize and prioritize work to meet multiple deadlines
• Must be able to communicate effectively in both oral and written form
• Strong communication skills required
• Must have strong time management skills
• Regular, predictable, full attendance is an essential function of the job
• Willingness to travel as necessary, work the required schedule, work at the specified location, complete a Penske employment application, submit to a background investigation (including past employment, education, and criminal history), and complete drug screening are required
Physical Requirements:
• The physical and mental demands described here are representative of those that must be met by an associate to successfully perform the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
• The associate will be required to: read; communicate verbally and/or in written form; remember and analyze certain information; and remember and understand certain instructions or guidelines.
• While performing the duties of this job, the associate may be required to stand, walk, and sit. The associate is frequently required to use hands to touch, handle, and feel, and to reach with hands and arms. The associate must be able to occasionally lift and/or move up to 25 lb/12 kg.
• Specific vision abilities required by this job include close vision, distance vision, peripheral vision, depth perception, and the ability to adjust focus.
Penske is an Equal Opportunity Employer.
Lead Data Insights Engineer - MedTech Surgery
Data engineer job in Raritan, NJ
At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at *******************
**Job Function:**
Data Analytics & Computational Sciences
**Job Sub Function:**
Data Engineering
**Job Category:**
Scientific/Technology
**All Job Posting Locations:**
Cincinnati, Ohio, United States of America, Raritan, New Jersey, United States of America
**Job Description:**
We are searching for the best talent for a **Lead Data Insights Engineer**. _The preferred location for this role is Raritan, NJ; however, candidates within a commutable distance of Cincinnati, OH will also be considered._ This role will work a Flex/Hybrid schedule with 3 days per week in office. There is NO remote option. Please note: Relocation assistance is **not** available for this role.
**Purpose**: The Lead Data Insights Engineer will play a pivotal role & be responsible for architecting, developing, & maintaining the Gold Layer ETL processes that transform raw data into actionable business insights within our intelligence ecosystem. This role orchestrates complex end-to-end data pipelines, ensuring seamless integration between upstream Bronze/Silver layers & downstream analytics, while enforcing data governance, lineage, & scalability. The engineer will design & optimize business bays, defining data sources, transformation rules, & dynamic aggregation logic to support granular reporting. A key focus is collaborating with the "Center" Data Strategy team to align upstream processes, minimize technical debt, & future-proof the platform. Additionally, the role will partner with business stakeholders (Marketing, Sales, Key Account Management) to co-prioritize data needs, build insights, & drive adoption of self-service analytics platforms. The engineer will partner with Data Science to innovate with AI/ML automation for data governance, work closely with IT to deploy scalable solutions, & ensure compliance with regulatory standards. By staying ahead of cloud analytics & AI trends, this role champions automation, eliminates manual inefficiencies, & enhances the intelligence ecosystem's performance, ultimately enabling strategy & insights-driven decision making & commercial execution across the organization.
**You will be responsible for**:
Data Management & ETL Orchestration:
+ Direct & innovate the end-to-end orchestration of the Gold layer ETL processes, owning complete Data Cleansing, Validation, Standardization, Enrichment, Aggregation & Formatting, ensuring performance, security, & scalability.
+ Build & maintain individual business bays with necessary data tables & business rules, logic, & mappings to transform raw data into dynamic insights.
+ Apply advanced data management techniques to aggregate & disaggregate insights dynamically, facilitating where possible granular levels of reporting.
+ Clearly document & organize data flow mapping, ensuring data lineage, storage, process, dependencies, integrity & governance are maintained throughout the transformation process.
Build, Foster & Maintain Key Partnerships:
+ Collaborate with the "Center" Data Strategy team to identify data gaps, understand the existing Enterprise Data Lakes' Bronze & Silver layer ETL processes, & leverage extractions from upstream data tables, proactively future-proofing to minimize maintenance efforts.
+ Engage the Business (Marketing, Sales, Key Account Management, Digital) to prioritize & define sources, requirements, rules, logic, & mapping for transforming raw data into dynamic insights, while providing fresh perspective of what else is possible.
+ Innovate with Data Science in ways to automate Data Governance of Business Mappings through AI/ML.
+ Co-shape with IT loading requirements for dynamic insights into a self-service platform & Commercial Intelligence Database.
+ Drive user acceptance & adoption in the utilization of the source of truth for Commercial Intelligence, Strategic Targeting, & Data Products Ecosystem (analytics, dashboards, & reports).
+ Communicate & cooperate with Legal, Privacy, & Compliance partners in alignment of all insights.
Consistent Learning & Innovation Champion:
+ Stay ahead of industry trends in data engineering, cloud analytics, and AI-driven insights.
+ Drive best practices in data management & recommend fresh approaches to enhance data solutions.
+ Develop business knowledge and understanding of JJMT processes and capabilities specific to market, competitive, and procedure volume datasets.
+ Champion automation & foster a cooperative environment conducive to innovation.
+ Responsible for ensuring personal & Company compliance with all Federal, State, local & Company regulations, policies, & procedures, as well as upholding all values of the J&J Credo.
**Qualifications / Requirements** :
+ Minimum of a Bachelor's Degree is **required**; Advanced Degree _strongly preferred_. Statistics, Mathematics, Data Science, Computer Science, or related field is **required**.
+ At least 3-5 years of post-academic work/industry experience is **required**; medical technology or healthcare field is _highly desired_.
+ Proficiency in Data Modeling (Azure, Data Factory, Databricks, Purview) is **required**.
+ Proficiency in ETL orchestration (Azure DevOps, CI/CD Pipelines) is **required**.
+ Proficiency in complex data transformation (SQL, Alteryx, Python, SAS, R) is **required**.
+ Experience in data flow mapping & advanced data visualization tools (Power BI, Tableau) is **required**.
+ Advanced Excel, PowerPoint, and strong data management skills are **required**.
+ Understanding of Data Models, MDM, Data Governance frameworks, integrating diverse data sources, & building self-service analytics platforms is **required**.
+ Attention to detail and extensive experience in aggregating data and formulating insights are **required**.
+ Experience with Microsoft Azure, Fabric, OneLake & Power BI is _highly preferred_.
+ Experience in Medical Technology is _highly preferred_.
+ Proven leadership ability & driver of cross-functional collaboration is required.
+ Excellent verbal & written communication skills, particularly in presenting technical information to non-technical stakeholders, is required.
+ Effective project management skills with experience managing multiple projects simultaneously is required.
+ Strong intuition to proactively identify current problems & gaps to effectively minimize future maintenance is required.
+ Results oriented, a strong demonstration of translating business strategy into business insights is highly preferred.
+ Experience with Commercial systems (Compensation, Sales Analytics, Contracting systems) is preferred.
Johnson & Johnson is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, disability, protected veteran status or other characteristics protected by federal, state or local law. We actively seek qualified candidates who are protected veterans and individuals with disabilities as defined under VEVRAA and Section 503 of the Rehabilitation Act.
Johnson & Johnson is committed to providing an interview process that is inclusive of our applicants' needs. If you are an individual with a disability and would like to request an accommodation, please contact us via *******************/contact-us/careers or contact AskGS to be directed to your accommodation resource.
**The anticipated base pay range for this position is:**
$91,000 to $147,200.
Additional Description for Pay Transparency:
Subject to the terms of their respective plans, employees and/or eligible dependents are eligible to participate in the following Company-sponsored employee benefit programs: medical, dental, vision, life insurance, short- and long-term disability, business accident insurance, and group legal insurance. Subject to the terms of their respective plans, employees are eligible to participate in the Company's consolidated retirement plan (pension) and savings plan (401(k)). Subject to the terms of their respective policies and date of hire, employees are eligible for the following time off benefits:
+ Vacation - 120 hours per calendar year
+ Sick time - 40 hours per calendar year (56 hours per calendar year for employees who reside in the State of Washington)
+ Holiday pay, including Floating Holidays - 13 days per calendar year
+ Work, Personal and Family Time - up to 40 hours per calendar year
+ Parental Leave - 480 hours within one year of the birth/adoption/foster care of a child
+ Condolence Leave - 30 days for an immediate family member; 5 days for an extended family member
+ Caregiver Leave - 10 days
+ Volunteer Leave - 4 days
+ Military Spouse Time-Off - 80 hours
Additional information can be found through the link below. *********************************************
Research Scientist / Data Scientist - AI & Analytics
Data engineer job in Doylestown, PA
Adelphi Research | Advanced Market Research Group (AMG)
About the Role
Adelphi Research is seeking a Research Scientist/Data Scientist to join our Advanced Market Research Group. You'll work directly with the Senior Statistician to execute pharmaceutical market research analytics while also contributing to our growing portfolio of AI-powered research tools.
You'll spend significant time doing traditional statistical analysis for client projects (segmentation, conjoint analysis, message testing), but you'll also be a key contributor to building and testing new AI-enhanced methodologies and applications. If you're a strong quantitative researcher who's genuinely curious about AI's potential in market research, and who has done some hands-on experimentation, this role offers a unique opportunity to shape the future of pharmaceutical insights.
What You'll Do
Client Analytics & Research Execution
Execute advanced statistical analyses for pharmaceutical market research studies:
Segmentation analysis (cluster analysis, latent class)
Conjoint analysis (discrete choice, MaxDiff)
Message testing and positioning research
Key driver analysis and predictive modeling
Perform data cleaning, weighting, and quality control for survey data
Create professional client deliverables: simulators in Excel and R Shiny, PowerPoint decks, Excel outputs, crosstabs, etc.
Support project teams with statistical methodology recommendations
Collaborate on study design, questionnaire development, and analytical plans
AI Tool Development & Innovation
Contribute to building and testing R Shiny applications that enhance our research capabilities
Help develop AI-powered qualitative coding tools for open-ended survey responses
Test and refine chatbot interfaces that help clients interact with their data
Support validation studies for synthetic respondent methodologies (comparing AI-generated data to real respondent data)
Integrate LLM APIs into existing workflows where appropriate
Experiment with AI applications for research acceleration and automation
Document findings, create user guides, and support internal adoption
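To make the AI-assisted qualitative coding idea concrete, here is a hedged sketch of just the prompt-construction step for coding open-ended survey responses against a fixed code frame. The code frame, helper name, and prompt wording are invented; the actual LLM call (OpenAI/Anthropic/Google APIs) is deliberately omitted so the sketch stays self-contained.

```python
import json

# Hypothetical code frame for open-ended pharma verbatims
CODE_FRAME = ["efficacy", "safety", "cost", "convenience"]

def build_coding_prompt(response_text: str) -> str:
    """Build a prompt asking a model to tag one verbatim with codes
    from a fixed frame, returning machine-readable JSON."""
    return (
        "You are coding open-ended pharma survey responses.\n"
        f"Allowed codes: {json.dumps(CODE_FRAME)}\n"
        "Return a JSON list of applicable codes only.\n\n"
        f"Response: {response_text!r}"
    )

prompt = build_coding_prompt("It works well but the copay is too high.")
print("cost" in prompt)  # True: the code frame is embedded in the prompt
```

Constraining the model to a fixed frame and a JSON output is what makes this kind of coding auditable against human coders in a validation study.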
Required Qualifications
Core Technical Skills
Strong proficiency in R programming, including the tidyverse ecosystem (dplyr, ggplot2, tidyr, purrr, ellmer, vitals, ragnar)
OR strong Python skills (pandas, scikit-learn) with willingness to become proficient in R
R Shiny development experience
Statistical methods: Applied experience with regression, clustering, factor analysis, conjoint/MaxDiff, or similar techniques
Data manipulation: Comfortable working with complex survey data, messy datasets, weighting schemes
Some programming experience beyond scripting: you've built something (a dashboard, an analysis pipeline, a tool), not just run one-off analyses
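As a toy illustration of the weighting-scheme work listed above, this sketch applies cell-based post-stratification: each respondent's weight is the population share of their cell divided by its sample share. The sample and targets are invented for the example.

```python
from collections import Counter

# Invented sample: 75% F, 25% M, against assumed 50/50 population targets
sample = ["F", "F", "F", "M"]
targets = {"F": 0.5, "M": 0.5}

counts = Counter(sample)
n = len(sample)

# weight = population share / sample share for each respondent's cell
weights = [targets[g] / (counts[g] / n) for g in sample]

# Overrepresented F respondents are downweighted; the lone M is upweighted,
# and the weights still sum (approximately) to the sample size.
print(weights)
```

Real weighting work typically involves many crossed cells (and raking when cells get sparse), but the ratio at the core is the same.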
AI Experience
Evidence of hands-on experimentation with AI tools beyond consumer use, such as:
Built a small project using an LLM API (even if just a personal experiment)
Created a prototype chatbot or AI-enhanced tool
Worked through tutorials and actually implemented something with OpenAI/Anthropic/Google APIs
Experimented with prompt engineering in a programmatic way
Contributed to an AI-related project at work or school
Foundation & Experience
Master's degree in Statistics, Data Science, Biostatistics, Economics, or related quantitative field (PhD welcomed but not required)
2-4 years of experience in applied statistics, data science, or market research analytics
Experience with survey research methodologies preferred
Strong problem-solving skills and attention to detail
Excellent communication skills: the ability to explain technical concepts to non-technical audiences
Comfort juggling multiple projects and shifting priorities
Nice-to-Haves (Valued but Not Required)
Domain & Methodology
Experience in pharmaceutical or healthcare market research
Understanding of survey design, sampling, and weighting approaches
Familiarity with FDA regulations or pharmaceutical compliance considerations
Background in segmentation, conjoint analysis, or MaxDiff studies
Technical & Tools
R package development or contribution to open-source projects
Familiarity with Azure, Google Cloud, or AWS
Python + R bilingual coding
Exposure to Bayesian methods or probabilistic models
Any experience with vector databases, RAG systems, or advanced AI architectures (rare but great if you have it)
Experience translating SPSS workflows to R
Learning & Development
Work directly with Senior Statisticians leading our AI initiatives
Exposure to diverse therapeutic areas across major pharmaceutical clients
Hands-on experience building production AI tools in a regulated industry
Professional development in both traditional research methods and emerging AI capabilities
Our Working Environment
You'll work directly with the Senior Statisticians and collaborate with:
Project teams serving major pharmaceutical clients
Data science colleagues across AMG
Qualitative researchers and strategic consultants
We value intellectual curiosity, pragmatic problem-solving, and a bias toward action. This is a role for someone who's equally comfortable running a factor analysis and experimenting with an LLM API, and who sees the value in both rigorous statistical methods and emerging AI capabilities.
Tech Stack: R (tidyverse, Shiny, tidymodels), Azure OpenAI APIs, Google Gemini APIs, Posit Connect. Open to strong Python developers willing to become R-proficient.
To Apply: Please include any code samples, GitHub links, or portfolio pieces that demonstrate your analytical work. If you've built anything with AI/LLMs (even personal projects or experiments), we'd love to see it.
The range below represents the low and high end of the base salary someone in this role may earn as an employee of an Omnicom Health Group company in the United States. Salaries will vary based on various factors including but not limited to professional and academic experience, training, associated responsibilities, and other business and organizational needs. The range listed is just one component of our total compensation package for employees. Salary decisions are dependent on the circumstances of each hire.
$90,000 - $110,000
Omnicom Health is committed to hiring and developing exceptional talent. We agree that talent is uniquely distributed, and we're focused on developing inclusive teams that can bring the best solutions to everything we do. We strongly believe that celebrating what makes us different makes us better together. Join us-we look forward to getting to know you. We will process your personal data in accordance with our
Recruitment Privacy Notice
.
Sr Pr Eng Data Engineering
Data engineer job in Spring House, PA
At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at *******************
Job Function:
Data Analytics & Computational Sciences
Job Sub Function:
Data Engineering
Job Category:
Scientific/Technology
All Job Posting Locations:
Cambridge, Massachusetts, United States of America, Spring House, Pennsylvania, United States of America
Job Description:
Data Lake Engineer and Solution Architect, R&D Therapeutics Discovery
About Innovative Medicine
Our expertise in Innovative Medicine is informed and inspired by patients, whose insights fuel our science-based advancements. Visionaries like you work on teams that save lives by developing the medicines of tomorrow.
Join us in developing treatments, finding cures, and pioneering the path from lab to life while championing patients every step of the way.
Learn more at *******************/innovative-medicine
We are searching for the best talent for Data Lake Engineer and Solution Architect, R&D Therapeutics Discovery in Spring House, PA or Beerse, Belgium.
The Data Lake Engineer and Solution Architect is responsible for designing, optimizing, and operationalizing the data lake to serve high-dimensional biology teams, including High-Content Imaging, High-Throughput Transcriptomics, and High-Throughput Proteomics, among others. The candidate will optimize data models for high-dimensional biology data teams, make high-dimensional data AI/ML-ready, tune storage and query performance for large-scale combined analyses across high-dimensional modalities, and deliver a standardized API for programmatic access.
What You'll Do
Design scalable data models and optimize schemas for high-dimensional biological data.
Architect and tune data lakes for performance and cost efficiency.
Develop standardized APIs and SDKs for secure, streamlined data access.
Collaborate with scientific teams and vendors to deliver platform capabilities.
Maintain documentation and train users on best practices.
Implement governance, security, and compliance frameworks.
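To give a feel for the "standardized APIs and SDKs" responsibility above, here is an illustrative-only sketch of a thin SDK wrapper that hands scientists stable dataset URIs instead of raw storage paths. The endpoint, URI scheme, modality names, and auth token are all invented for the example; a real implementation would sit on whatever API the platform actually exposes.

```python
from dataclasses import dataclass

@dataclass
class LakeClient:
    """Hypothetical client for a data-lake access API."""
    base_url: str
    token: str  # placeholder; real auth scheme is an open design choice

    def dataset_uri(self, modality: str, study: str) -> str:
        """Build a stable URI for a dataset (naming convention assumed)."""
        return f"{self.base_url}/v1/datasets/{modality}/{study}"

client = LakeClient("https://datalake.example", token="placeholder")
print(client.dataset_uri("transcriptomics", "STUDY-001"))
# https://datalake.example/v1/datasets/transcriptomics/STUDY-001
```

The design point is that a consistent addressing scheme lets omics and imaging teams join across modalities without hard-coding storage layouts.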
What We're Looking For
Degree in Computer Science, Data Engineering, Bioinformatics, or related field; advanced degree (MS/PhD) preferred.
7+ years in data/platform engineering, including 3+ years with data lakes.
Experience with biological data (omics, imaging) and analytic workflows.
Hands-on expertise with Snowflake, SQL at scale, and cloud platforms.
Strong programming and scripting skills (Python, SQL), and pipeline orchestration tools.
Proven ability to design APIs and communicate technical trade-offs effectively.
Core Expertise
Data modeling and schema optimization.
Performance tuning for data lakes and queries.
API development and secure data access.
Governance, lineage, and metadata management.
Cloud-based data platforms and orchestration tools.
Programming in Python and SQL.
Preferred Qualifications
Familiarity with ML infrastructure and feature stores.
Advanced Snowflake optimization and cost-control strategies.
Knowledge of data catalog tools and metadata standards.
Experience with containerization and CI/CD for data pipelines.
Background in omics or high-dimensional imaging pipelines.
Required Skills:
Preferred Skills:
Advanced Analytics, Agility Jumps, Consulting, Continuous Improvement, Critical Thinking, Data Engineering, Data Governance, Data Modeling, Data Privacy Standards, Data Science, Execution Focus, Hybrid Clouds, Mentorship, Tactical Planning, Technical Credibility, Technical Development, Technical Writing, Technologically Savvy
Information/Data Architect
Data engineer job in Reading, PA
Information/Data Architect, Reading, PA, 6 months. Information/Data Architects are responsible for assisting in setting up the overall solution environment that solves the business problem from a data or information perspective. The individual should have a basic understanding of various data and solution architectures, including those that leverage structured and unstructured data.
The individual must be familiar with the following terms and concepts:
Master data management
Metadata management
Data quality
Business intelligence/data warehousing
Data interoperability
Analytics
Data integration and aggregation
The Information Architect should be able to assist in the following:
Setting information architecture standards and methodologies
Identifying the information that the Institute produces and consumes
Creating the strategic requirements, principles, models, and designs for information across the ecosystem
Assisting in the identification of user requirements
Defining the information architecture of the solution
Preparing data models and designing information structure, workflow, and data flow
The preferred candidate should be familiar with and have experience in developing data models.
Specific to this opportunity, the individual must be familiar with the following technologies:
Pivotal Cloud Foundry
Pivotal Big Data Suite
Data Lakes and other Big Data Concepts with proven experience in using data lakes
Hadoop (Cloudera).
Additional Information
All your information will be kept confidential according to EEO guidelines.
AI Automation Developer/Snowflake Data warehouse Developer
Data engineer job in Lansdale, PA
Title: AI Automation Developer/Snowflake Data Warehouse Developer
Job Type: Contract-to-Hire
Location: Philadelphia Region (Lansdale, PA)
Authorization: U.S. Citizen or Green Card Required
Recruiting Partner: Tri-Force Consulting Services
Note: This is a hybrid role.
Summary
Seeking a Snowflake + AI/Automation Lead to build data pipelines, intelligent workflows, Power Platform apps, and analytics dashboards.
Requirements
Snowflake architecture & SQL
PowerApps / Power Automate / Power BI
AI integration experience preferred
Must be local to Greater Philadelphia
Must be U.S. Citizen or Permanent Resident (Green Card holder)
If you are bright, motivated, skilled, a difference-maker, able to get things done, able to work with minimum direction, enthusiastic, a thinker, able to juggle and multi-task, an effective communicator, and a leader, then we would like to hear from you. We need exceptionally capable people for this role for our client, so get back to us and tell us why you think you are a fit.
About Us:
Since 2000, Tri-Force Consulting Services (https://triforce-inc.com) has been an MBE/SDB-certified IT consulting firm in the Philadelphia region. Tri-Force specializes in IT staffing, software development (web and mobile apps), systems integration, data analytics, system automation, cybersecurity, and cloud technology solutions for government and commercial clients. Tri-Force works with clients to overcome obstacles such as increasing productivity, increasing efficiencies through automation, and lowering costs. Our clients benefit from our three distinguishing core values: integrity, diligence, and technological excellence. Tri-Force is a six-time winner among the fastest-growing companies in Philadelphia and a four-time winner on the Inc. 5000 list of the nation's fastest-growing companies.
"}}],"is Mobile":false,"iframe":"true","job Type":"Full time","apply Name":"Apply Now","zsoid":"668268639","FontFamily":"PuviRegular","job OtherDetails":[{"field Label":"Industry","uitype":2,"value":"IT Services"},{"field Label":"Work Experience","uitype":2,"value":"5+ years"},{"field Label":"Salary","uitype":1,"value":"NA"},{"field Label":"City","uitype":1,"value":"Lansdale"},{"field Label":"State\/Province","uitype":1,"value":"Lansdale, PA"},{"field Label":"Zip\/Postal Code","uitype":1,"value":"19019"}],"header Name":"AI Automation Developer\/Snowflake Data warehouse Developer","widget Id":"**********00072311","awli IntegId":"urn:li:organization:1324067","is JobBoard":"false","user Id":"**********00209003","attach Arr":[],"awli ApiKey":"771tbiwd9tzd3x","custom Template":"2","awli HashKey":"2698bcee6d3fc037e951f9279e5e3997478fdcd828759eabf05d376bafdf4540682e183319526d929ef7a1bfd1c1c8943e811e2a76a406f21353833f4a24f5a6","is CandidateLoginEnabled":true,"job Id":"**********26087228","FontSize":"15","google IndexUrl":"https:\/\/triforce\-inc.zohorecruit.com\/recruit\/ViewJob.na?digest=tm QIbOnWflDZ0S2pElxhAsSNrVhKpx5VMZvr9YyzkPw\-&embedsource=Google","location":"Lansdale","embedsource":"CareerSite","indeed CallBackUrl":"https:\/\/recruit.zoho.com\/recruit\/JBApplyAuth.do","logo Id":"74n94aca1a8df4ad7413cac341f372706a17a"}
Skype for Business L3 Engineer
Data engineer job in Raritan, NJ
Skype for Business Engineer. The Skype for Business Engineer will provide engineering and third-level support for the Skype for Business platform and related services. A successful candidate will be able to demonstrate deep technical skill and proficiency with the design, implementation, and support of Microsoft Skype for Business, and will analyze, architect, design, and deliver solutions across the Skype for Business platform components. This involves prototyping, solutioning, blueprints, road-mapping, technical delivery, and syndication with multiple stakeholders and peer technical groups through detailed documentation and presentations. The role covers architecture and engineering of Microsoft Skype for Business Server, providing guidance and deep technical knowledge on the various Skype for Business architecture elements deployed in an enterprise environment.
Job Description
* Must have experience with either Skype for Business or Lync
* Experience with SfB Enterprise Voice
* Experience deploying and configuring Lync/Skype for Business audio, video and telephony solutions as well as integration with Exchange/Office 365 Unified Messaging
* Experience integrating Skype for Business with other unified communications systems (chat, media, video, voice)
* Excellent written, organizational, and verbal communication skills as well as collaboration skills. Able to convey technical concepts to individuals without prior technical knowledge or understanding of topics.
* Able to coordinate and collaborate with business application stakeholders and infrastructure teams to ensure successful upgrades, installations or integration activities
* Microsoft Exchange Unified Messaging
* Fault-tolerant and Highly Available Architecture
* Experience with SfB logging tools and an in-depth understanding of SIP and SfB logs
* Experience with Polycom Phones
* Experience working with Voice Gateways (example: Audiocodes, Sonus, Oracle)
* Experience working with Analog devices and gateways.
* Ability to perform client testing, server patching and deployment assistance.
Experience with/and Troubleshooting Knowledge Of
* Mail flow troubleshooting: OWA, ActiveSync, OAB, internal mail flow, and other Exchange-related services
* Network troubleshooting: basic connectivity testing and some firewall knowledge
* Backup knowledge: how to complete and troubleshoot various backups, from DPM to snapshots
* SAN knowledge: managing disk space and troubleshooting
* Active Directory: troubleshooting Domain Controllers, replication, and some user accounts
Job Qualifications
* Previous experience as Junior Network or Voice Administrator.
* Ability to read Wireshark or Netmon Packet Capture.
* Experience with Office 365 a plus.
* Strong analytical skills; able to assess and solve issues in a high-pressure environment
* Skills and proven experience with following products (in priority order):
* Skype for Business Server
* Lync Server 2013
* Windows Server 2008/2012
* Microsoft SQL
* VoIP, SIP, and SIP trunking (centralized, local)
* Network Architecture and QoS (optimized for real-time voice/video)
Education
* 3-5 years of relevant experience with the specified infrastructure and IT technologies.
* A Bachelor's Degree in computer science, computer engineering, management information systems, information technology or a similar field.
* An equivalent combination of education and experience may substitute for a degree.
* Certifications are a plus
Software Engineer, iOS Core Product - Allentown, USA
Data engineer job in Allentown, PA
The mission of Speechify is to make sure that reading is never a barrier to learning.
Over 50 million people use Speechify's text-to-speech products to turn whatever they're reading - PDFs, books, Google Docs, news articles, websites - into audio, so they can read faster, read more, and remember more. Speechify's text-to-speech reading products include its iOS app, Android App, Mac App, Chrome Extension, and Web App. Google recently named Speechify the Chrome Extension of the Year and Apple named Speechify its App of the Day.
Today, nearly 200 people around the globe work on Speechify in a 100% distributed setting - Speechify has no office. These include frontend and backend engineers, AI research scientists, and others from Amazon, Microsoft, and Google, leading PhD programs like Stanford, high growth startups like Stripe, Vercel, Bolt, and many founders of their own companies.
Overview
With the growth of our iOS app, now the #18 app in the App Store's Productivity category, and our recent recognition with Apple's 2025 Design Award for Inclusivity, we find the need for a Senior iOS Engineer to help us support the new user base as well as work on new and exciting projects to push our mission forward.
This is a key role, ideal for someone who thinks strategically, enjoys fast-paced environments, is passionate about making product decisions, and has experience building great user experiences that delight users.
We are a flat organization that allows anyone to become a leader by showing excellent technical skills and delivering results consistently and fast. Work ethic, solid communication skills, and obsession with winning are paramount.
Our interview process involves several technical interviews and we aim to complete them within 1 week.
What You'll Do
Opportunity to lead key engineering and product decisions
Actively shipping production code for the Speechify iOS app
Work within a dedicated product team
Participate in product discussions to shape the product roadmap
Maintain and enhance the existing complex app architecture
An Ideal Candidate Should Have
Experience. You've worked on products that scaled to a large user base
Track record. You have worked on various products from inception to decent traction. You have been responsible for engineering the product
Customer obsession. We expect every team member whose responsibilities directly impact customers to be constantly obsessed about providing the best possible experience
Product thinking. You make thoughtful decisions about the evolution of your product and support internal teams and designers in taking the right direction
Speed. You work quickly to generate ideas and know how to decide which things can ship now and what things need time
Focus. We're a high-growth startup with a busy, remote team. You know how and when to engage or be heads down
Technical skills. Swift, SwiftUI
Technical Requirements:
Swift Programming Language
SwiftUI experience
Experience in Multithreading Programming
Working with CI/CD infrastructure
Experience with Fastlane
SOLID principles, the ability to write every single class according to SOLID
Experience with Git and understanding of different Git strategies
What We offer:
A fast-growing environment where you can help shape the company and product
An entrepreneurial crew that supports risk, intuition, and hustle
The opportunity to make a big impact in a transformative industry
A competitive salary, a collegiate atmosphere, and a commitment to building a great asynchronous culture
Work on a product that millions of people use and where daily feedback includes users sharing that they cried when they first found the product because it was so impactful on their lives
Support people with learning differences like Dyslexia, ADD, Low Vision, Concussions, Autism, and Second Language Learners, and give reading superpowers to professionals all over the world
Work in one of the fastest growing sectors of tech: Intersection of Artificial Intelligence and Audio
The United States-based salary range for this role is $140,000-$200,000 USD/year + bonus + stock, depending on experience
Think you're a good fit for this job?
Tell us more about yourself and why you're interested in the role when you apply.
And don't forget to include links to your portfolio and LinkedIn.
Not looking but know someone who would make a great fit?
Refer them!
Speechify is committed to a diverse and inclusive workplace.
Speechify does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
Qlik Data Engineer
Data engineer job in Easton, PA
This position is NOT eligible for visa sponsorship. The role specializes in comprehensive data pipeline development and management, enabling our current Business Intelligence team to focus on analytics and business value while ensuring robust, scalable data integration solutions.
Background and Current State
Our Business Intelligence team currently operates as a multi-disciplinary unit managing the complete data lifecycle from ingestion to visualization. The current structure requires our BI professionals to wear many hats, handling responsibilities that span data engineering, ETL/ELT development, data modeling, report creation, dashboard development, and business relationship management. While this approach has served us well in establishing our data capabilities, the increasing complexity of our data ecosystem and growing business demands have created capacity constraints and specialization challenges.
Our data integration landscape has evolved significantly with the adoption of Qlik Data Integration and Qlik Talend Cloud Enterprise Edition. The current team's broad responsibilities limit the depth of specialization possible in any single area, particularly in the technical aspects of modern real-time data integration and the advanced features available in Qlik Talend Cloud Enterprise Edition. As our organization increasingly requires real-time analytics, operational reporting, and seamless data movement across hybrid cloud environments, we need dedicated expertise to ensure our Qlik platform delivers optimal performance and business value.
Primary Job Responsibilities
Data Integration Architecture and Engineering
* Develop and maintain ETL/ELT data pipelines leveraging Qlik Data Integration for data warehouse generation across bronze, silver, and gold layers
* Build consumer-facing data marts, views, and push-down calculations to enable improved analytics by the BI team and citizen developers
* Implement enterprise data integration patterns supporting batch, real-time, and hybrid processing requirements
* Coordinate execution of and monitor pipelines to ensure timely reload of EDW
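The bronze/silver/gold layering named above can be sketched in a few lines of plain Python standing in for the Qlik/Talend pipelines; the records, field names, and validation rule are invented purely for illustration.

```python
# Bronze: raw ingests, untyped, may contain bad rows (invented data)
bronze = [
    {"order_id": "1", "amount": "10.50", "region": "east"},
    {"order_id": "2", "amount": "n/a",   "region": "east"},
    {"order_id": "3", "amount": "4.25",  "region": "west"},
]

def to_silver(rows):
    """Silver: cleanse and typecast; drop rows that fail validation."""
    out = []
    for r in rows:
        try:
            out.append({**r, "amount": float(r["amount"])})
        except ValueError:
            continue  # a real pipeline would quarantine, not drop silently
    return out

def to_gold(rows):
    """Gold: consumer-facing aggregate, here revenue by region."""
    agg = {}
    for r in rows:
        agg[r["region"]] = agg.get(r["region"], 0.0) + r["amount"]
    return agg

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'east': 10.5, 'west': 4.25}
```

Each layer has one job (land, cleanse, serve), which is what lets the BI team consume gold-layer marts without touching ingestion logic.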
Technical Implementation and Platform Management
* Configure and manage Qlik Data Integration components including pipeline projects, lineage, data catalog, data quality, and data marketplace
* Implement data quality rules and monitoring using Qlik and Talend tools
* Manage the Qlik tenant, security, and access, and manage the Data Movement Gateway
Performance, Monitoring, and Governance
* Monitor and optimize data replication performance, latency, and throughput across all integration points
* Implement comprehensive logging, alerting, and performance monitoring
* Conduct regular performance audits and capacity planning for integration infrastructure
* Establish SLA monitoring and automated recovery procedures for critical data flows
Collaboration and Enterprise Support
* Provide technical expertise on Qlik Data Integration best practices and enterprise patterns
* Support database administrators and infrastructure teams with replication and integration architecture
* Lead technical discussions on data strategy and platform roadmap decisions
Key Qualifications
Required Skills
* Bachelor's degree in Computer Science, Information Systems, or related technical field
* 4+ years of experience in enterprise data integration with at least 2 years of hands-on Qlik or Talend experience
* Strong understanding of change data capture (CDC) technologies and real-time data streaming concepts
* Strong understanding of data lake and data warehouse strategies, and data modeling
* Advanced SQL skills with expertise in database replication, synchronization, and performance tuning
* Experience with enterprise ETL/ELT tools and data integration patterns
* Proficiency in at least one programming language (Java, Python, or SQL scripting)
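Change data capture, required above, boils down to replaying an ordered stream of row-level change events onto a replica. A minimal sketch follows; the event names and shapes are illustrative, not any vendor's actual format:

```python
# Hypothetical change-log events, as a CDC tool might emit them after reading
# a source database's transaction log. The schema here is made up for illustration.
changes = [
    {"op": "insert", "id": 1, "row": {"id": 1, "name": "alice"}},
    {"op": "insert", "id": 2, "row": {"id": 2, "name": "bob"}},
    {"op": "update", "id": 2, "row": {"id": 2, "name": "robert"}},
    {"op": "delete", "id": 1, "row": None},
]

def apply_cdc(target, events):
    """Replay ordered change events onto a keyed target dict (the replica)."""
    for ev in events:
        if ev["op"] in ("insert", "update"):
            target[ev["id"]] = ev["row"]   # upsert the latest row image
        elif ev["op"] == "delete":
            target.pop(ev["id"], None)     # tolerate deletes of unseen keys
    return target

replica = apply_cdc({}, changes)
print(replica)  # only bob remains, with the updated name
```

Real CDC tooling adds ordering guarantees, schema evolution, and latency monitoring on top of this core replay loop, which is why the role calls out replication performance and throughput separately.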
Preferred Qualifications
* Qlik Data Integration certification or Talend certification (Data Integration, Data Quality, or Big Data)
* Experience with cloud platforms (AWS or Azure) and hybrid integration scenarios
* Experience with Snowflake preferred
* Understanding of data governance frameworks and regulatory compliance requirements
* Experience with API management and microservices architecture
Soft Skills
* Strong analytical and troubleshooting capabilities with complex integration challenges
* Excellent communication skills with ability to explain technical integration concepts to business stakeholders
* Collaborative approach with experience in cross-functional enterprise teams
* Detail-oriented mindset with commitment to data accuracy and system reliability
* Adaptability to evolving integration requirements and emerging technologies
This position is NOT eligible for visa sponsorship.
EEO Statement: Victaulic is an Equal Employment Opportunity (EOE/M/F/Vets/Disabled) employer and welcomes all qualified applicants. Applicants will receive fair and impartial consideration without regard to race, gender, color, religion, national origin, age, disability, veteran status, sexual orientation, genetic data, or other legally protected status
Victaulic Staffing Partner Communication Policy
All staffing agencies are strictly forbidden from directly contacting any Victaulic employees, except those within the Human Resources/Talent Acquisition team. All communications, inquiries and candidate submissions must be routed through Victaulic's Human Resources/Talent Acquisition team. Non-compliance with this policy may result in the suspension of partnership, cancellation of the current contract, and/or the imposition of a mandatory probation period before any future business can resume. Additionally, non-compliance may lead to a permanent ban on future business. This policy ensures a streamlined and compliant recruitment process.
Associate Data Scientist
Data engineer job in Reading, PA
This role will help in the development of predictive analytics solutions. The Associate Data Scientist will work on 1-2 projects concurrently under the supervision of senior members of the Data Science team. The projects at this level will vary from small and straightforward to large and complex. Similarly, impact can range from team/department level up to organization-wide. The Associate Data Scientist provides analysis and assists in developing machine-learning models to solve business problems. Such efforts require cross-functional involvement from various disciplines across the organization.
Major Responsibilities:
Data evaluation and analysis:
* Assist in identifying appropriate data sources to answer business questions
* Extract, blend, cleanse, and organize data
* Visualize data
* Identify (and ameliorate) outliers and missing or incomplete records in the data
Model building and implementation
* Participate in the identification of appropriate techniques and algorithms for building models
* Create and test models
* Build applications and embed models
Communication
* Engage in best practices discussions with other team members regarding modeling activities
* Discuss project activities and results with team
* Help in the documentation and presentation of project stories
Identify opportunities for savings and process improvement:
* Collaborate with various stakeholders to identify business problems
* Help conduct ROI analysis to determine project feasibility
* Help build business case to solve such business problems
* Other projects as assigned by the supervisor
Qualifications:
* Master's degree required; concentration in Engineering, Operations Research, Statistics, Applied Math, Computer Science, or a related quantitative field preferred
* 1+ years of experience in data or business analytics along with a Master's degree
* 1 year of experience designing and building machine learning applications using structured or unstructured datasets is required
* Practical programming experience in Python, R, or other high-level scripting languages is required
* Demonstrated experience with one or more machine learning techniques, including logistic regression, decision trees, random forests, and clustering, is required
* Knowledge of and experience with SQL is preferred
* Experience in the following areas:
* Machine learning (intermediate)
* Statistical modeling (elementary)
* Supervised learning (intermediate)
* Statistical computing packages (e.g., R) (elementary)
* Scripting languages (e.g., Python) (intermediate)
* Ability to collect and analyze complex data
* Must be able to translate data insights into action
* Must be able to organize and prioritize work to meet multiple deadlines
* Must be able to communicate effectively in both oral and written form
* Strong communication skills required
* Must have strong time management skills
* Regular, predictable, full attendance is an essential function of the job
* Willingness to travel as necessary, work the required schedule, work at the specific location required, complete Penske employment application, submit to a background investigation (to include past employment, education, and criminal history) and drug screening are required.
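For context on the techniques named in the qualifications above, logistic regression is small enough to sketch without libraries. A toy, library-free example trained by gradient descent; the feature values and hyperparameters are made up for illustration:

```python
import math

# Toy logistic regression trained by gradient descent: a library-free sketch of
# one technique named in the qualifications. Data and hyperparameters are illustrative.
X = [(-2.0,), (-1.0,), (1.0,), (2.0,)]  # single hypothetical feature, centered at 0
y = [0, 0, 1, 1]                        # binary labels (linearly separable toy set)

w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):
    gw = gb = 0.0
    for (x,), t in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid probability
        gw += (p - t) * x                          # log-loss gradient w.r.t. w
        gb += (p - t)                              # log-loss gradient w.r.t. b
    w -= lr * gw / len(X)
    b -= lr * gb / len(X)

def predict(x):
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

preds = [predict(x) for (x,) in X]
print(preds)  # recovers the training labels on this separable toy set
```

In day-to-day work a library such as scikit-learn would replace this loop, but the sketch shows the mechanics an Associate Data Scientist is expected to understand.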
Physical Requirements:
* The physical and mental demands described here are representative of those that must be met by an associate to successfully perform the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
* The associate will be required to: read; communicate verbally and/or in written form; remember and analyze certain information; and remember and understand certain instructions or guidelines.
* While performing the duties of this job, the associate may be required to stand, walk, and sit. The associate is frequently required to use hands to touch, handle, and feel, and to reach with hands and arms. The associate must be able to occasionally lift and/or move up to 25lbs/12kg.
* Specific vision abilities required by this job include close vision, distance vision, peripheral vision, depth perception and the ability to adjust focus.
Penske is an Equal Opportunity Employer.
About Penske Truck Leasing/Transportation Solutions
Penske Truck Leasing/Transportation Solutions is a premier global transportation provider that delivers essential and innovative transportation, logistics and technology services to help companies and people move forward. With headquarters in Reading, PA, Penske and its associates are driven by a dedication to excellence and a commitment to customer success. Visit Go Penske to learn more.
Job Category: Information Technology
Job Family: Analytics & Intelligence
Address: 100 Kachel Boulevard
Primary Location: US-PA-Reading
Employer: Penske Truck Leasing Co., L.P.
Req ID: 2512517
Lead Data Insights Engineer - MedTech Surgery
Data engineer job in Raritan, NJ
At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at *******************
Job Function:
Data Analytics & Computational Sciences
Job Sub Function:
Data Engineering
Job Category:
Scientific/Technology
All Job Posting Locations:
Cincinnati, Ohio, United States of America, Raritan, New Jersey, United States of America
Job Description:
We are searching for the best talent for a Lead Data Insights Engineer. The preferred location for this role is Raritan, NJ; however, candidates within a commutable distance of Cincinnati, OH will also be considered. This role will work a Flex/Hybrid schedule with 3 days per week in office. There is NO remote option. Please note: relocation assistance is not available for this role.
Purpose: The Lead Data Insights Engineer will play a pivotal role & be responsible for architecting, developing, & maintaining the Gold Layer ETL processes that transform raw data into actionable business insights within our intelligence ecosystem. This role orchestrates complex end-to-end data pipelines, ensuring seamless integration between upstream Bronze/Silver layers & downstream analytics, while enforcing data governance, lineage, & scalability. The engineer will design & optimize business bays, defining data sources, transformation rules, & dynamic aggregation logic to support granular reporting. A key focus is collaborating with the "Center" Data Strategy team to align upstream processes, minimize technical debt, & future-proof the platform. Additionally, the role will partner with business stakeholders (Marketing, Sales, Key Account Management) to co-prioritize data needs, build insights, & drive adoption of self-service analytics platforms. The engineer will partner with Data Science to innovate with AI/ML automation for data governance, work closely with IT to deploy scalable solutions, & ensure compliance with regulatory standards. By staying ahead of cloud analytics & AI trends, this role champions automation, eliminates manual inefficiencies, & enhances the intelligence ecosystem's performance, ultimately enabling strategy & insights-driven decision making & commercial execution across the organization.
You will be responsible for:
Data Management & ETL Orchestration:
* Direct & innovate the end-to-end orchestration of the Gold layer ETL processes, owning complete Data Cleansing, Validation, Standardization, Enrichment, Aggregation & Formatting, ensuring performance, security, & scalability.
* Build & maintain individual business bays with necessary data tables & business rules, logic, & mappings to transform raw data into dynamic insights.
* Apply advanced data management techniques to aggregate & disaggregate insights dynamically, facilitating where possible granular levels of reporting.
* Clearly document & organize data flow mappings, ensuring data lineage, storage, processing, dependencies, integrity, & governance are maintained throughout the transformation process.
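The dynamic aggregation and disaggregation described above can be pictured as rolling the same fact rows up to whatever granularity a report requests. A minimal sketch; the field names and values are hypothetical:

```python
from collections import defaultdict

# Hypothetical fact rows at the finest grain (field names are illustrative only).
facts = [
    {"region": "NE", "account": "A", "product": "stapler", "units": 10},
    {"region": "NE", "account": "A", "product": "suture", "units": 5},
    {"region": "NE", "account": "B", "product": "stapler", "units": 3},
    {"region": "SW", "account": "C", "product": "suture", "units": 7},
]

def rollup(rows, keys, measure="units"):
    """Aggregate the same facts to any requested granularity (list of key fields)."""
    out = defaultdict(int)
    for r in rows:
        out[tuple(r[k] for k in keys)] += r[measure]
    return dict(out)

# The same rows serve both a coarse and a fine report without reshaping the source.
by_region = rollup(facts, ["region"])
by_region_product = rollup(facts, ["region", "product"])
print(by_region)
print(by_region_product)
```

Keeping facts at the finest grain and aggregating on demand is what lets a Gold layer serve "granular levels of reporting" without maintaining one table per report.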
Build, Foster & Maintain Key Partnerships:
* Collaborate with the "Center" Data Strategy to identify data gaps, understand existing Enterprise Data Lakes' Bronze & Silver layer ETL processes & leverage extractions from upstream data tables, proactively future proofing to minimize maintenance efforts.
* Engage the Business (Marketing, Sales, Key Account Management, Digital) to prioritize & define sources, requirements, rules, logic, & mapping for transforming raw data into dynamic insights, while providing fresh perspective of what else is possible.
* Innovate with Data Science in ways to automate Data Governance of Business Mappings through AI/ML.
* Co-shape with IT loading requirements for dynamic insights into a self-service platform & Commercial Intelligence Database.
* Drive user acceptance & adoption in the utilization of the source of truth for Commercial Intelligence, Strategic Targeting, & Data Products Ecosystem (analytics, dashboards, & reports).
* Communicate & cooperate with Legal, Privacy, & Compliance partners in alignment of all insights.
Consistent Learning & Innovation Champion:
* Stay ahead of industry trends in data engineering, cloud analytics, and AI-driven insights.
* Drive best practices in data management & recommend fresh approaches to enhance data solutions.
* Develop business knowledge and understanding of JJMT processes and capabilities specific to market, competitive, and procedure volume datasets.
* Champion automation & foster a cooperative environment conducive to innovation.
* Responsible for ensuring personal & Company compliance with all Federal, State, local & Company regulations, policies, & procedures, as well as upholding all values of the J&J Credo.
Qualifications / Requirements:
* A minimum of a Bachelor's Degree in Statistics, Mathematics, Data Science, Computer Science, or a related field is required; an Advanced Degree is strongly preferred.
* 3-5 years of post-academic work/industry experience is required; experience in the medical technology or healthcare field is highly desired.
* Proficiency in Data Modeling (Azure, Data Factory, Databricks, Purview) is required.
* Proficiency in ETL orchestration (Azure DevOps, CI/CD Pipelines) is required.
* Proficiency in complex data transformation (SQL, Alteryx, Python, SAS, R) is required.
* Experience in data flow mapping & advanced data visualization tools (Power BI, Tableau) is required.
* Advanced Excel, PowerPoint and strong data management skills are required.
* Understanding Data Models, MDM, Data Governance frameworks, integrating diverse data sources & building self-service analytics platforms is required.
* Attention to detail and extensive experience in aggregating data and formulating insights are required.
* Experience with Microsoft Azure, Fabric, OneLake & PowerBI is highly preferred.
* Experience in Medical Technology is highly preferred.
* Proven leadership ability & a track record of driving cross-functional collaboration are required.
* Excellent verbal & written communication skills, particularly in presenting technical information to non-technical stakeholders, are required.
* Effective project management skills, with experience managing multiple projects simultaneously, are required.
* Strong intuition to proactively identify current problems & gaps to effectively minimize future maintenance is required.
* Results oriented, a strong demonstration of translating business strategy into business insights is highly preferred.
* Experience with Commercial systems (Compensation, Sales Analytics, Contracting systems) is preferred.
Johnson & Johnson is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, disability, protected veteran status or other characteristics protected by federal, state or local law. We actively seek qualified candidates who are protected veterans and individuals with disabilities as defined under VEVRAA and Section 503 of the Rehabilitation Act.
Johnson & Johnson is committed to providing an interview process that is inclusive of our applicants' needs. If you are an individual with a disability and would like to request an accommodation, please contact us via *******************/contact-us/careers or contact AskGS to be directed to your accommodation resource.
The anticipated base pay range for this position is $91,000 to $147,200.
Additional Description for Pay Transparency:
Subject to the terms of their respective plans, employees and/or eligible dependents are eligible to participate in the following Company sponsored employee benefit programs: medical, dental, vision, life insurance, short- and long-term disability, business accident insurance, and group legal insurance. Subject to the terms of their respective plans, employees are eligible to participate in the Company's consolidated retirement plan (pension) and savings plan (401(k)). Subject to the terms of their respective policies and date of hire, employees are eligible for the following time off benefits:
* Vacation - 120 hours per calendar year
* Sick time - 40 hours per calendar year; for employees who reside in the State of Washington - 56 hours per calendar year
* Holiday pay, including Floating Holidays - 13 days per calendar year
* Work, Personal and Family Time - up to 40 hours per calendar year
* Parental Leave - 480 hours within one year of the birth/adoption/foster care of a child
* Condolence Leave - 30 days for an immediate family member; 5 days for an extended family member
* Caregiver Leave - 10 days
* Volunteer Leave - 4 days
* Military Spouse Time-Off - 80 hours
Additional information can be found through the link below. *********************************************
Information/Data Architect
Data engineer job in Reading, PA
Qualifications
Information/Data Architect
Reading, PA
6 Months.
Information/Data Architects are responsible for assisting in setting up the overall solution environment that solves the business problem from a data or information perspective. The Information Architect should have a basic understanding of various data and solution architectures, including those that leverage structured and unstructured data.
The individual must be familiar with the following terms and concepts:
Master data management
Metadata management
Data quality
Business intelligence/data warehousing
Data interoperability
Analytics
Data integration and aggregation
The Information Architect should be able to assist in the following:
Set information architecture standards and methodologies.
Identify the information that the Institute produces and consumes.
Create the strategic requirements, principles, models, and designs for information across the ecosystem.
Assist in the identification of user requirements.
Define the information architecture of the solution.
Prepare data models, designing information structures, workflows, and data flows.
The preferred candidate should be familiar with and have experience in developing data models.
Specific to this opportunity, the individual must be familiar with the following technologies:
Pivotal Cloud Foundry
Pivotal Big Data Suite
Data lakes and other big data concepts, with proven experience using data lakes
Hadoop (Cloudera).
Additional Information
All your information will be kept confidential according to EEO guidelines.
Senior Laboratory Data Automation Architect, R&D Therapeutics Discovery
Data engineer job in Spring House, PA
At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at *******************
Job Function:
R&D Operations
Job Sub Function:
Laboratory Operations
Job Category:
Professional
All Job Posting Locations:
Spring House, Pennsylvania, United States of America
Job Description:
About Innovative Medicine
Our expertise in Innovative Medicine is informed and inspired by patients, whose insights fuel our science-based advancements. Visionaries like you work on teams that save lives by developing the medicines of tomorrow.
Join us in developing treatments, finding cures, and pioneering the path from lab to life while championing patients every step of the way.
Learn more at *******************/innovative-medicine
We are searching for the best talent for a Senior Laboratory Data Automation Architect, R&D Therapeutics Discovery to be located in Spring House, PA.
Purpose: The Laboratory Data Automation Architect acts as the keystone of automation projects, ensuring that scientific workflows, hardware deployment, software development, and systems integration are all aligned, defined, and interoperable within Johnson & Johnson's laboratory infrastructures. This role combines scientific acumen with expert-level capabilities in automation design, platform management, and cross-functional collaboration to deliver integrated, scalable, and compliant automation solutions across modality agnostic Therapeutics Discovery.
You will be responsible for:
Architectural leadership
Designing and governing scalable, robust automated scientific workflows that span hardware, software, data, and networking layers.
Defining critical data entities and data flows to enable seamless transitions between disparate components and future-ready extensions.
Establishing and promoting best methods for API/SDK usage, data models, and integration standards across internal and external tools.
Program and project execution
Leading end-to-end automation initiatives from concept through deployment, validation, and ongoing optimization.
Aligning projects with strategic objectives, timelines, budgets, and risk management in a global, matrixed environment.
Defining and maintaining platform roadmaps, governance, change control, and documentation.
Hardware and software integration
Identifying and selecting instrumentation and software solutions based on scientific merit and ease of integration.
Capably orchestrating the integration of lab instrumentation, high-throughput systems, robotics, ELN/LIMS, MES/OT, and data analytics platforms.
Overseeing system integration, including scheduling software, middleware, APIs, and data exchange protocols.
Data management, security, and compliance
Ensuring data quality, lineage, traceability, security, and compliance (IT/OT security and relevant standards).
Implementing proactive monitoring, diagnostic tools, and maintenance regimes to minimize downtime and risk.
Developing runbooks, validation plans, and documentation to support audits and inspections.
Vendor and ecosystem engagement
Engaging with hardware and software vendors to define standards, roadmaps, and interoperable solutions.
Communicating industry standards to drive external development toward aligning with J&J needs and the broader industry.
Leadership and collaboration
Leading and developing global, cross-functional teams (engineering, IT, data science, research, and operations) in a matrix setting.
Fostering an automation-centered culture, supporting internal & external staff, and promoting continuous learning.
Building positive relationships with internal collaborators and external partners to accelerate impact.
Innovation and capability building
Staying current with emerging automation technologies, data architectures, and analytics methods.
Exploring opportunities in cloud, edge, AI/ML, digital twin, and advanced analytics to enhance scientific workflows.
Documentation and knowledge management
Maintaining comprehensive architecture diagrams, specifications, standards, and training materials.
Ensuring available, accessible documentation for users, operators, and support teams.
Qualifications / Requirements:
Education:
Master's degree in Engineering, Computer Science, Life Sciences or a closely related field is required.
Experience and Skills:
Required:
10+ years of experience in automation and platform management is required.
Domain Expertise: Proven track record deploying and operating advanced automation platforms in laboratory or research environments; familiarity with laboratory instrumentation and analytical techniques is required.
Systems & Integration: Broad experience with robotics, high-throughput screening, LIMS/ELN, data management systems, and end-to-end system integration (scheduling software, APIs, middleware, data exchange) is required.
Leadership: Demonstrated ability to lead global, matrixed teams and manage multiple initiatives simultaneously is required.
Communication: Excellent interpersonal and stakeholder-management skills; ability to translate complex technical concepts to diverse audiences is required.
Governance & Quality: Familiarity with validation, change control, documentation, machine safety, and regulatory considerations relevant to life sciences (IT/OT security) is required.
Preferred:
Industry Exposure: Experience in pharmaceutical research, biotech, or medical device environments is preferred.
Technical Breadth: Knowledge of LIMS/ELN systems, data visualization, and analytics toolsets; experience with programming languages such as Python, R, C#, or Java; scripting for automation is preferred.
Cloud & Data: Experience with cloud platforms (AWS, Azure) and data orchestration/workflow tooling is a plus.
Data Governance: Strong understanding of data integrity, lineage, security frameworks, and scalable data architectures is preferred.
Soft Skills: A strong analytical mentality, problem-solving agility, and a collaborative leadership style are highly preferred.
What Success Looks Like:
High-availability, scalable automation platforms with measurable improvements in throughput and data quality.
Clear, standardized data models and workflows that enable seamless collaboration across sites.
Reduced integration friction, lower maintenance costs, and faster time-to-delivery for scientific programs.
Strong vendor relationships and an active pipeline of external developments aligned with J&J needs.
A thriving, skilled team with clear career progression and a culture of continuous improvement.
Johnson & Johnson is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, disability, protected veteran status or other characteristics protected by federal, state or local law. We actively seek qualified candidates who are protected veterans and individuals with disabilities as defined under VEVRAA and Section 503 of the Rehabilitation Act.
Johnson & Johnson is committed to providing an interview process that is inclusive of our applicants' needs. If you are an individual with a disability and would like to request an accommodation, external applicants please contact us via *******************/contact-us/careers, internal employees contact AskGS to be directed to your accommodation resource.
Preferred Skills:
Critical Thinking, Data Management and Informatics, Epidemiology, Ethical and Participant Safety Considerations, Inventory Management, Laboratory Operations, Laboratory Safety, Mental Agility, Preclinical Research, Problem Solving, Process Improvements, Research and Development, Standard Operating Procedure (SOP), Strategic Thinking, Technical Credibility, Vendor Management
Software Engineer, Platform - Allentown, USA
Data engineer job in Allentown, PA
The mission of Speechify is to make sure that reading is never a barrier to learning.
Over 50 million people use Speechify's text-to-speech products to turn whatever they're reading - PDFs, books, Google Docs, news articles, websites - into audio, so they can read faster, read more, and remember more. Speechify's text-to-speech reading products include its iOS app, Android App, Mac App, Chrome Extension, and Web App. Google recently named Speechify the Chrome Extension of the Year and Apple named Speechify its 2025 Design Award winner for Inclusivity.
Today, nearly 200 people around the globe work on Speechify in a 100% distributed setting - Speechify has no office. These include frontend and backend engineers, AI research scientists, and others from Amazon, Microsoft, and Google, leading PhD programs like Stanford, high growth startups like Stripe, Vercel, Bolt, and many founders of their own companies.
Overview
The responsibilities of our Platform team include building and maintaining all backend services, including, but not limited to, payments, analytics, subscriptions, new products, text-to-speech, and external APIs.
This is a key role and ideal for someone who thinks strategically, enjoys fast-paced environments, is passionate about making product decisions, and has experience building great user experiences that delight users.
We are a flat organization that allows anyone to become a leader by showing excellent technical skills and delivering results consistently and fast. Work ethic, solid communication skills, and obsession with winning are paramount.
Our interview process involves several technical interviews and we aim to complete them within 1 week.
What You'll Do
Design, develop, and maintain robust APIs, including the public TTS API and internal APIs such as Payment, Subscription, Auth, and Consumption Tracking, ensuring they meet business and scalability requirements
Oversee the full backend API landscape, enhancing and optimizing for performance and maintainability
Collaborate on B2B solutions, focusing on customization and integration needs for enterprise clients
Work closely with cross-functional teams to align backend architecture with overall product strategy and user experience
An Ideal Candidate Should Have
Proven experience in backend development: TS/Node (required)
Direct experience with GCP and knowledge of AWS, Azure, or other cloud providers
Efficiency in ideation and implementation, prioritizing tasks based on urgency and impact
Preferred: Experience with Docker and containerized deployments
Preferred: Proficiency in deploying high availability applications on Kubernetes
What We Offer
A dynamic environment where your contributions shape the company and its products
A team that values innovation, intuition, and drive
Autonomy, fostering focus and creativity
The opportunity to have a significant impact in a revolutionary industry
Competitive compensation, a welcoming atmosphere, and a commitment to an exceptional asynchronous work culture
The privilege of working on a product that changes lives, particularly for those with learning differences like dyslexia, ADD, and more
An active role at the intersection of artificial intelligence and audio - a rapidly evolving tech domain
The United States-based salary range for this role is $140,000-$200,000 USD/year + bonus + stock, depending on experience
Think you're a good fit for this job?
Tell us more about yourself and why you're interested in the role when you apply.
And don't forget to include links to your portfolio and LinkedIn.
Not looking but know someone who would make a great fit?
Refer them!
Speechify is committed to a diverse and inclusive workplace.
Speechify does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
Auto-ApplyAssociate Data Scientist
Data engineer job in Reading, PA
This role will help in the development of predictive analytics solutions. The Associate Data Scientist will work on 1-2 projects concurrently under the supervision of senior members of the Data Science team. The projects at this level will vary from small and straightforward to large and complex. Similarly, impact can range from team/department level up to organization-wide. The Associate Data Scientist provides analysis and assists in developing machine-learning models to solve business problems. Such efforts require cross-functional involvement from various disciplines across the organization. Major Responsibilities:
Data evaluation and analysis:
• Assist in identifying appropriate data sources to answer business questions
• Extract, blend, cleanse, and organize data
• Visualize data
• Identify (and ameliorate) outliers and missing or incomplete records in the data
Model building and implementation:
• Participate in the identification of appropriate techniques and algorithms for building models
• Create and test models
• Build applications and embed models
Communication:
• Engage in best-practices discussions with other team members regarding modeling activities
• Discuss project activities and results with the team
• Help document and present project stories
Identify opportunities for savings and process improvement:
• Collaborate with various stakeholders to identify business problems
• Help conduct ROI analysis to determine project feasibility
• Help build business cases to solve such business problems
• Other projects as assigned by the supervisor
Qualifications:
• Master's degree required; concentration in Engineering, Operations Research, Statistics, Applied Math, Computer Science, or a related quantitative field preferred
• 1+ years of experience along with a Master's degree in data or business analytics
• 1 year of experience designing and building machine learning applications using structured or unstructured datasets is required
• Practical experience programming in Python, R, or other high-level scripting languages is required
• Demonstrated experience with one or more machine learning techniques, including logistic regression, decision trees, random forests, and clustering, is required
• Knowledge of and experience with SQL is preferred
• Experience in the following areas:
  - Machine learning (intermediate)
  - Statistical modeling (elementary)
  - Supervised learning (intermediate)
  - Statistical computing packages (e.g., R) (elementary)
  - Scripting languages (e.g., Python) (intermediate)
• Ability to collect and analyze complex data
• Must be able to translate data insights into action
• Must be able to organize and prioritize work to meet multiple deadlines
• Must be able to communicate effectively in both oral and written form
• Strong communication skills required
• Must have strong time management skills
• Regular, predictable, full attendance is an essential function of the job
• Willingness to travel as necessary, work the required schedule, work at the specified location, complete a Penske employment application, submit to a background investigation (including past employment, education, and criminal history), and complete drug screening are required
Physical Requirements:
• The physical and mental demands described here are representative of those that must be met by an associate to successfully perform the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
• The associate will be required to: read; communicate verbally and/or in written form; remember and analyze certain information; and remember and understand certain instructions or guidelines.
• While performing the duties of this job, the associate may be required to stand, walk, and sit. The associate is frequently required to use hands to touch, handle, and feel, and to reach with hands and arms. The associate must be able to occasionally lift and/or move up to 25 lbs/12 kg.
• Specific vision abilities required by this job include close vision, distance vision, peripheral vision, depth perception, and the ability to adjust focus.
Penske is an Equal Opportunity Employer.
Sr Pr Eng Data Engineering
Data engineer job in Spring House, PA
At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at *******************
**Job Function:**
Data Analytics & Computational Sciences
**Job Sub Function:**
Data Engineering
**Job Category:**
Scientific/Technology
**All Job Posting Locations:**
Cambridge, Massachusetts, United States of America, Spring House, Pennsylvania, United States of America
**Job Description:**
**Data Lake Engineer and Solution Architect, R&D Therapeutics Discovery**
**About Innovative Medicine**
Our expertise in Innovative Medicine is informed and inspired by patients, whose insights fuel our science-based advancements. Visionaries like you work on teams that save lives by developing the medicines of tomorrow.
Join us in developing treatments, finding cures, and pioneering the path from lab to life while championing patients every step of the way.
Learn more at *******************/innovative-medicine
**We are searching for the best talent for Data Lake Engineer and Solution Architect, R&D Therapeutics Discovery in Spring House, PA or Beerse, Belgium.**
The Data Lake Engineer and Solution Architect is responsible for designing, optimizing, and operationalizing the data lake to serve high-dimensional biology teams, including High-Content Imaging, High-Throughput Transcriptomics, and High-Throughput Proteomics, among others. The candidate will optimize data models for high-dimensional biology data teams, make high-dimensional data AI/ML ready, tune storage and query performance for large-scale combined analyses across high-dimensional modalities, and deliver a standardized API for programmatic access.
**What You'll Do**
+ Design scalable data models and optimize schemas for high-dimensional biological data.
+ Architect and tune data lakes for performance and cost efficiency.
+ Develop standardized APIs and SDKs for secure, streamlined data access.
+ Collaborate with scientific teams and vendors to deliver platform capabilities.
+ Maintain documentation and train users on best practices.
+ Implement governance, security, and compliance frameworks.
**What We're Looking For**
+ Degree in Computer Science, Data Engineering, Bioinformatics, or related field; advanced degree (MS/PhD) preferred.
+ 7+ years in data/platform engineering, including 3+ years with data lakes.
+ Experience with biological data (omics, imaging) and analytic workflows.
+ Hands-on expertise with Snowflake, SQL at scale, and cloud platforms.
+ Strong programming and scripting skills (Python, SQL), and pipeline orchestration tools.
+ Proven ability to design APIs and communicate technical trade-offs effectively.
**Core Expertise**
+ Data modeling and schema optimization.
+ Performance tuning for data lakes and queries.
+ API development and secure data access.
+ Governance, lineage, and metadata management.
+ Cloud-based data platforms and orchestration tools.
+ Programming in Python and SQL.
**Preferred Qualifications**
+ Familiarity with ML infrastructure and feature stores.
+ Advanced Snowflake optimization and cost-control strategies.
+ Knowledge of data catalog tools and metadata standards.
+ Experience with containerization and CI/CD for data pipelines.
+ Background in omics or high-dimensional imaging pipelines.
**Required Skills:**
**Preferred Skills:**
Advanced Analytics, Agility Jumps, Consulting, Continuous Improvement, Critical Thinking, Data Engineering, Data Governance, Data Modeling, Data Privacy Standards, Data Science, Execution Focus, Hybrid Clouds, Mentorship, Tactical Planning, Technical Credibility, Technical Development, Technical Writing, Technologically Savvy
**The anticipated base pay range for this position is:**
.
Additional Description for Pay Transparency:
.