
Data Engineer Jobs At POWER Engineers

- 7111 Jobs
  • Senior Software Engineer

    Power Engineers 4.5 company rating

    Data Engineer Job At POWER Engineers

    Secondary Locations Job Code **17967** # of openings **1** Apply Now (**************************************************** Requisition?org=POWERENGINEERS&cws=44&rid=17967)

    **Senior Software Engineer**

    POWER's Digital Utility Business Unit is seeking a Senior Software Engineer to join our team and play a key role in designing, developing, testing, and maintaining GIS software solutions. This position is critical in ensuring our applications meet industry standards, company policies, and best practices while delivering high-quality solutions to our clients. The ideal candidate will have a strong background in software development, GIS technologies, and a passion for innovation. The selected individual may be able to work remotely or out of one of our many office locations.

    Summary of Roles and Responsibilities: The Senior Software Engineer will have the following roles and responsibilities:

    + Design, develop, and maintain GIS software applications and integrations.
    + Ensure code quality, performance, and scalability through best practices.
    + Collaborate with cross-functional teams to define requirements and deliver solutions.
    + Integrate GIS data with other enterprise systems, databases, and third-party applications.
    + Troubleshoot and resolve technical issues in GIS applications.
    + Stay updated with the latest GIS technologies and industry trends.
    + Take initiative in problem-solving and contribute innovative ideas.
    + Work independently while also being a strong team player, supporting colleagues and fostering collaboration.
    + Demonstrate a proactive, go-getter attitude to drive projects forward and meet deadlines.
    + Travel up to 20% to client sites for assessments, system implementations, and stakeholder meetings.

    **Required Education/Experience:** The Senior Software Engineer must have the following education, experience, skills, and knowledge:

    + Bachelor's or Master's degree in Computer Science, Mathematics, Electrical Engineering, or equivalent experience.
    + Strong proficiency in modern programming principles and algorithms.
    + 5+ years as a Software Engineer, preferably with experience in GIS.
    + 5+ years developing end-user applications and interfaces in C++, Java, C#, Go, Python, JavaScript, or similar languages.
    + Experience with Scrum/Agile methodologies.
    + Experience with geospatial libraries and REST APIs (see the sketch after this listing).
    + Strong analytical and problem-solving skills.
    + Excellent communication and management skills to translate complex technical concepts into business-friendly language.
    + Ability to work in a dynamic, fast-paced consulting environment, managing multiple projects simultaneously.

    **Desired Education/Experience:** The following experience is desirable but not a deal-breaker.

    + Experience with Utilities GIS data, including electric, gas, water, and wastewater.
    + Experience with Esri's ArcGIS Suite, including Esri's services-based architecture.
    + Experience with Esri's Utility Network.
    + Experience in cloud-based solutions, including Docker or similar container technologies.

    At POWER Engineers, you can have a rewarding career on every level. Our philosophy is simple: Do Good. Have Fun. Build Success. You'll work on fun and challenging projects and initiatives. You'll have the chance to make a positive impact on society and the environment, and you'll find the support, coaching, and training it takes to advance your career. We get to make POWER a great place to work. That includes providing competitive compensation, professional development, and a full benefit package:

    + Medical/Dental/Vision
    + Paid Holidays
    + Vacation/Paid Sick Leave
    + Voluntary Life Insurance
    + 401K
    + Telehealth Benefit covers all providers
    + Maternity and Paternity Leave
    + New Dads and Moms Benefit program
    + Fertility Benefits
    + Gender affirming care

    POWER is a fun engineering firm. That might seem contradictory to some, but it works for us!

    Base Salary Range: $125,000 - $145,000 per year

    The range for this position is being displayed in compliance with all state and local regulations. Salaries are set based on a number of factors, including an individual's job-related knowledge, skills, experience, and education. This means that no two candidates are alike. The range provided above does not include additional compensation such as bonus, health benefits, vacation, 401(k) match, etc.

    **POWER Engineers, Member of WSP, is an Equal Opportunity Employer, including women, minorities, veterans, and individuals with disabilities.**
    $125k-145k yearly 9d ago
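    The listing above calls for experience with geospatial libraries and REST APIs. Purely as an illustration, the Python sketch below queries an ArcGIS-style feature service over REST with the `requests` library; the endpoint URL, layer index, and field names are hypothetical placeholders, not details from the posting.

    ```python
    # Minimal sketch: querying an ArcGIS-style feature service REST endpoint.
    # The URL, layer index, and field names below are hypothetical placeholders.
    import requests

    FEATURE_SERVICE_URL = (
        "https://example.com/arcgis/rest/services/ElectricNetwork/FeatureServer/0/query"
    )

    def fetch_features(where: str = "1=1", out_fields: str = "*") -> list[dict]:
        """Return feature attributes matching a SQL-like where clause."""
        params = {
            "where": where,           # attribute filter, e.g. "STATUS = 'ACTIVE'"
            "outFields": out_fields,  # comma-separated field list or "*"
            "returnGeometry": "false",
            "f": "json",              # ask the service for a JSON response
        }
        response = requests.get(FEATURE_SERVICE_URL, params=params, timeout=30)
        response.raise_for_status()
        payload = response.json()
        return [feature["attributes"] for feature in payload.get("features", [])]

    if __name__ == "__main__":
        for row in fetch_features(where="OBJECTID < 10"):
            print(row)
    ```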
  • Senior Data Engineer

    Cypress HCM 3.8 company rating

    San Francisco, CA Jobs

    Own the tooling, frameworks, data pipelines, integrations, databases, schemas, and data integrity of this environment. You are a critical member of the team, supplying the data that underpins key operations, uncovers hidden issues, and powers insight through reports, analytics, and dashboards owned and managed by our team. Our operations team will depend on you to find, source, prepare, and supply the right data. Our marketing and sales teams will depend on you to serve the right data quickly and reliably to our customers so they can make decisions. Our research team relies on you to deliver reports that inform their decisions.

    Responsibilities:
    - Enable our research team to make data-driven decisions based on timely, consistent, complete, and correct data, reports, and analytics
    - Work closely with engineering team members to support our existing ETL processes and solve our analytics needs
    - Be the subject matter expert in all things data pipelines, data lakes and warehouses, tooling & frameworks, and integration and sourcing of data
    - Collaborate with cross-functional teams to gather and analyze business requirements
    - Develop and optimize ETL processes to ensure data accuracy and integrity (a minimal ETL sketch follows this listing)
    - Implement data governance and security measures to protect sensitive information
    - Provide technical leadership and mentorship to other BI engineers
    - Possess the vision to guide us to the next stage of our data journey
    - Be detail-oriented, with a healthy level of skepticism that drives you to investigate even minor data discrepancies
    - Quickly learn new languages, tools, and frameworks, and be willing to explore uncharted territories to achieve your objectives
    - Maintain curiosity about innovative approaches and strive to push the boundaries of existing methodologies
    - Excel in a collaborative and team-oriented environment

    Required Skills:
    - B.S. in Computer Science or a related data field
    - 6+ years of experience in Data Engineering, ETL, Data Science, or a related role
    - Expert knowledge of Azure, Data Factory, SQL, and Fabric
    - Proficiency in BI tools such as Tableau, Power BI, or Looker
    - Proficiency in ETL tools and processes
    - Knowledge of Snowflake
    - Excellent problem-solving and analytical skills
    - Strong communication and collaboration abilities
    - Experience in the financial services or investment industry

    Compensation: $170-$200k, DOE
    $170k-200k yearly 12d ago
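    The listing above centers on ETL development and data integrity. Purely as an illustration of the extract-transform-load pattern (the file path, connection string, and column names are made up, and the posting's actual stack of Azure Data Factory, Fabric, and Snowflake would be configured in those tools), here is a minimal Python sketch using pandas and SQLAlchemy:

    ```python
    # Minimal ETL sketch: extract a CSV, apply light transforms, load into a SQL table.
    # File path, connection string, and column names are hypothetical placeholders.
    import pandas as pd
    from sqlalchemy import create_engine

    def run_etl(csv_path: str, connection_url: str, table_name: str) -> int:
        # Extract: read the raw file into a DataFrame.
        raw = pd.read_csv(csv_path, parse_dates=["order_date"])

        # Transform: drop incomplete rows, normalize text, derive a reporting column.
        cleaned = (
            raw.dropna(subset=["customer_id", "amount"])
               .assign(
                   region=lambda df: df["region"].str.strip().str.upper(),
                   order_month=lambda df: df["order_date"].dt.to_period("M").astype(str),
               )
        )

        # Load: append into the reporting table.
        engine = create_engine(connection_url)
        cleaned.to_sql(table_name, engine, if_exists="append", index=False)
        return len(cleaned)

    if __name__ == "__main__":
        rows = run_etl("orders.csv", "sqlite:///reporting.db", "orders_clean")
        print(f"Loaded {rows} rows")
    ```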
  • Data Engineer

    Robert Half 4.5 company rating

    Swanton, OH Jobs

    • Salary: $90,000-$100,000 (based on 2-4 years of experience)
    • Bonus: 5% of annual salary
    • PTO: 13 days (104 hours) + 2 paid sick leave days + 1 day off with a signed Annual Physical form from a physician; 10 company-paid holidays
    • 401K and profit sharing that goes into the 401K
    • 100% onsite; once up to speed, 1 day a week remote is possible

    Top Skills:
    • Experience with Azure services: Data Factory, Databricks, Synapse Analytics, Data Lake
    • ETL and Microsoft Fabric experience and proficiency with SQL
    $90k-100k yearly 20d ago
  • AI Data Engineer

    Aquent 4.1 company rating

    Redmond, WA Jobs

    Responsibilities:
    - Perform daily data analysis to find gaps and curate information in the systems, working with Azure teams to ensure data hygiene is maintained in source systems.
    - Identify data sources and prepare queries using query languages (Kusto, SQL, etc.) for data extraction, and prepare reports in Azure Data Explorer or Power BI (see the query sketch after this listing).
    - Design processes, data models, and reports to surface insights related to all aspects of change management and various workstreams.
    - Work with services and various Azure teams to capture the learnings from incidents and store them in a centralized location or model for consistent retrieval.
    - Propose, plan, and expedite “identified” process automations to improve efficiency.
    - Track and analyze incidents for services on a day-to-day basis for causation and, if in scope, to understand any association with deployments (reviewed or not reviewed), and provide updates to the team.
    - Provide feedback on any data improvements recommended by services, or the respective follow-up, into the program workstreams.
    - Plan and create efficient techniques and operations (e.g., inserting, aggregating, joining) to transform raw data into a form that is compatible with downstream data sources, databases, and formats.
    - Independently use software, query languages, and computing tools to transform raw data across end-to-end pipelines.
    - Evaluate data to ensure data quality and completeness using queries, data wrangling, and statistical techniques.
    - Merge data into distributed systems, products, or tools for further processing.

    Requirements:
    - Proficient in writing Kusto queries and in tuning and improving the performance of existing queries, with knowledge of data concepts related to structured and non-structured databases.
    - Proficient in developing data models (databases/tables) to create reports in Azure Data Explorer, Power BI, or a relevant BI tool to deliver insightful reports and data.
    - Proficient in the use of analytics, which consists of the discovery, interpretation, and communication of meaningful patterns in data (complex and large datasets), including applying data patterns toward effective decision making.
    - Ability to identify rules, principles, trends, patterns, and/or relationships that explain facts, data, or other information; and to analyze information, make correct inferences, and draw accurate conclusions.
    - Proficient in developing Power Apps, Power Automate flows, and Lens jobs for data retrieval, storage, or other required automations.
    - Ability to communicate effectively with various Azure teams to work closely on troubleshooting, data analysis, and/or reporting.

    Required/Preferred Qualifications:
    - Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 5+ years of experience in business analytics, data science, software development, data modeling, or data engineering work; OR Master's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 3+ years of business analytics, data science, software development, data modeling, or data engineering work experience; OR equivalent experience.
    $97k-125k yearly est. 14h ago
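    The Aquent listing above asks for Kusto queries prepared against Azure Data Explorer. As a hedged illustration, the Python sketch below runs a KQL query through the azure-kusto-data package; the cluster URL, database name, and Incidents table are hypothetical placeholders, not details from the posting.

    ```python
    # Minimal sketch: running a Kusto (KQL) query against Azure Data Explorer from Python.
    # Cluster URL, database, and table names are hypothetical placeholders.
    from azure.kusto.data import KustoClient, KustoConnectionStringBuilder
    from azure.kusto.data.helpers import dataframe_from_result_table

    CLUSTER = "https://myadxcluster.westus.kusto.windows.net"  # hypothetical
    DATABASE = "IncidentOps"                                   # hypothetical

    # Authenticate with whatever the environment provides (here: Azure CLI login).
    kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(CLUSTER)
    client = KustoClient(kcsb)

    # Daily incident counts per service over the last 7 days.
    QUERY = """
    Incidents
    | where Timestamp > ago(7d)
    | summarize IncidentCount = count() by Service, bin(Timestamp, 1d)
    | order by Timestamp asc
    """

    response = client.execute(DATABASE, QUERY)
    df = dataframe_from_result_table(response.primary_results[0])
    print(df.head())
    ```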
  • MarTech Data Science

    Russell Tobin 4.1 company rating

    San Jose, CA Jobs

    Job Title: Senior Data Scientist (MarTech)
    Duration: 12+ months contract (potential to extend)
    Pay: $100-$110/hr on W2 (depending on experience)

    MarTech Data Science Measurement empowers Client to optimize marketing ROI by generating data-driven recommendations. We lead the way in defining and advancing best practices for measuring and optimizing marketing impact. We collaborate with Marketing, Finance, and Engineering to provide actionable recommendations and tools based on effective, timely, and granular measurements.

    Our team's tenets are:
    - Actionable: Deliver insights that drive confident business decisions.
    - Impactful: Prioritize projects based on their expected value to Client.
    - Balanced: Adapt methods to business questions and data realities, acknowledging limitations.
    - Rigorous: Maintain methodological integrity and quantify the sensitivity of findings.
    - Innovative: Invest in advancing measurement science and developing new methods.
    - Influential: Share learnings across Client and the broader data science community.

    The difference you will make: We are seeking an experienced Data Scientist with deep expertise and experience building data pipelines and automating measurement systems. The ideal candidate will have experience working with Apache Airflow and Spark, as well as a comprehensive understanding of SQL. They are expected to be fluent in multiple statistical programming languages, particularly R and Python. They should also be familiar with causal inference methodologies, including experimental and observational approaches. The ideal candidate will be capable of hands-on data work to produce insights and recommendations for stakeholders, driving impactful decisions in our marketing efforts.

    A typical day:
    - Data Pipelines: Build novel data pipelines that accelerate the work of our data science team, including automating data validation, unit testing, and other common processes.
    - Productionalizing Prototypes: Refine prototypes built by other scientists and prepare them to be productionalized within Client's systems.
    - Causal Inference: Apply and develop causal inference methods, especially around MMM, to estimate the effectiveness of Client's marketing initiatives (a small MMM-related sketch follows this listing).
    - Data Analysis: Conduct data pulls, analyze trends, and create new features to support measurement efforts.
    - Collaboration: Work effectively with cross-functional teams, providing insights that optimize marketing strategies.

    Your expertise:
    - PhD in Economics, Statistics, Marketing, or a related field, or a Master's degree in a similar field with 2+ years of experience.
    - Deep knowledge of data management tools, systems, and processes, including Apache Airflow and Spark.
    - Proficiency in statistical programming (Python and R) and database usage (SQL).
    - Experience with media mix modeling (MMM).
    - Ability to communicate complex concepts clearly to stakeholders at varying technical levels.
    - Proven track record of solving business problems through data science methods.

    Preferred expertise:
    - Passion for marketing and consumer science, with a desire to stay informed about the latest advances in the field.
    - Familiarity with Bayesian modeling and its applications in marketing.
    - Experience with causal ML modeling.
    - Experience in developing end-to-end models for data-driven decision-making.
    $100-110 hourly 14h ago
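    The listing above centers on media mix modeling (MMM). As a small, illustrative sketch only (the decay rate and spend series are invented, and real MMM work involves far more than this), here is a geometric adstock transform, one common building block for modeling the carryover effect of media spend:

    ```python
    # Minimal sketch: geometric adstock, a common transform in media mix modeling (MMM).
    # Spend values and the decay rate below are illustrative only.
    import numpy as np

    def geometric_adstock(spend: np.ndarray, decay: float) -> np.ndarray:
        """Carry over a fraction `decay` of each period's effect into the next period.

        adstock[t] = spend[t] + decay * adstock[t-1]
        """
        adstocked = np.zeros_like(spend, dtype=float)
        carryover = 0.0
        for t, x in enumerate(spend):
            carryover = x + decay * carryover
            adstocked[t] = carryover
        return adstocked

    if __name__ == "__main__":
        weekly_spend = np.array([100.0, 0.0, 0.0, 50.0, 0.0, 0.0])
        print(geometric_adstock(weekly_spend, decay=0.5))
        # -> 100, 50, 25, 62.5, 31.25, 15.625
    ```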
  • MarTech Data Science

    Russell Tobin 4.1 company rating

    Santa Rosa, CA Jobs

    Job Title: Senior Data Scientist (MarTech)
    Duration: 12+ months contract (potential to extend)
    Pay: $100-$110/hr on W2 (depending on experience)

    MarTech Data Science Measurement empowers Client to optimize marketing ROI by generating data-driven recommendations. We lead the way in defining and advancing best practices for measuring and optimizing marketing impact. We collaborate with Marketing, Finance, and Engineering to provide actionable recommendations and tools based on effective, timely, and granular measurements.

    Our team's tenets are:
    - Actionable: Deliver insights that drive confident business decisions.
    - Impactful: Prioritize projects based on their expected value to Client.
    - Balanced: Adapt methods to business questions and data realities, acknowledging limitations.
    - Rigorous: Maintain methodological integrity and quantify the sensitivity of findings.
    - Innovative: Invest in advancing measurement science and developing new methods.
    - Influential: Share learnings across Client and the broader data science community.

    The difference you will make: We are seeking an experienced Data Scientist with deep expertise and experience building data pipelines and automating measurement systems. The ideal candidate will have experience working with Apache Airflow and Spark, as well as a comprehensive understanding of SQL. They are expected to be fluent in multiple statistical programming languages, particularly R and Python. They should also be familiar with causal inference methodologies, including experimental and observational approaches. The ideal candidate will be capable of hands-on data work to produce insights and recommendations for stakeholders, driving impactful decisions in our marketing efforts.

    A typical day:
    - Data Pipelines: Build novel data pipelines that accelerate the work of our data science team, including automating data validation, unit testing, and other common processes.
    - Productionalizing Prototypes: Refine prototypes built by other scientists and prepare them to be productionalized within Client's systems.
    - Causal Inference: Apply and develop causal inference methods, especially around MMM, to estimate the effectiveness of Client's marketing initiatives.
    - Data Analysis: Conduct data pulls, analyze trends, and create new features to support measurement efforts.
    - Collaboration: Work effectively with cross-functional teams, providing insights that optimize marketing strategies.

    Your expertise:
    - PhD in Economics, Statistics, Marketing, or a related field, or a Master's degree in a similar field with 2+ years of experience.
    - Deep knowledge of data management tools, systems, and processes, including Apache Airflow and Spark.
    - Proficiency in statistical programming (Python and R) and database usage (SQL).
    - Experience with media mix modeling (MMM).
    - Ability to communicate complex concepts clearly to stakeholders at varying technical levels.
    - Proven track record of solving business problems through data science methods.

    Preferred expertise:
    - Passion for marketing and consumer science, with a desire to stay informed about the latest advances in the field.
    - Familiarity with Bayesian modeling and its applications in marketing.
    - Experience with causal ML modeling.
    - Experience in developing end-to-end models for data-driven decision-making.
    $100-110 hourly 14h ago
  • Lead Data Engineer

    Ledgent Technology 3.5 company rating

    Newport Beach, CA Jobs

    As a Lead Data Engineer, you will fill a new contract-to-hire role that sits on the data team in the technology organization. Your colleagues will include scrum masters, data analysts, and fellow Data, AI, Governance, and QA professionals. Join our highly collaborative, innovative team.

    How you'll help move us forward: This role will lead a junior team member and contractors.
    * Partner with data architects, analysts, engineers, and stakeholders to understand data requirements and deliver solutions.
    * Design & build scalable products with robust security, quality, and governance protocols.
    * Lead delivery of data engineering pipelines to enable Sales, Marketing, Advanced Analytics, and CRM use-cases.
    * Create low-level design artifacts, including mapping specifications.
    * Build scalable and reliable data pipelines to support data ingestion (batch and/or streaming) and transformation from multiple data sources using cloud-based technologies.
    * Create unit/integration tests and implement automated build and deployment.
    * Participate in code reviews to ensure standards and best practices.
    * Deploy, monitor, and maintain production systems.
    * Use the Agile Framework to organize, manage, and execute work.
    * Demonstrate adaptability, initiative, and attention to detail through deliverables and ways of working.

    The experience you bring:
    * Bachelor's degree in Computer Science, Data Science, or Statistics
    * 8+ years of experience in analysis, design, development, and delivery of data
    * 8+ years of experience and proficiency in SQL, ETL, ELT, leading cloud data warehouse technologies, data transformation, and data management tools
    * Understanding of data engineering best practices and data integration patterns
    * 2+ years of experience with DevOps and CI/CD
    * 1+ years of experience (not just POC) using Git and Python
    * Experience in agile methodologies
    * Effective communication & facilitation, both verbal and written
    * Team-Oriented: Collaborating effectively with team and stakeholders
    * Analytical Skills: Strong problem-solving skills with the ability to break down complex data solutions

    What makes you stand out:
    * Experience with Customer, Product, or Contract data domains in the Financial Services industry
    * Experience building Customer Data Platforms (CDP) or Customer 360
    * Experience working with Azure DevOps (ADO), Build and Release CI/CD pipelines, and orchestration
    * Expertise in Snowflake and DBT
    * Strong communicator, with skills leading and facilitating cross-functional teams
    * Experience with automation, scripting, and testing in a data delivery environment
    * Understanding of data catalogs, glossaries, data quality, and effective data governance
    * Financial Services/Investments/Insurance domain knowledge
    * Experience working in complex data systems

    Desired Skills and Experience: Roth Staffing is looking for a Data Engineer III for the client.

    All qualified applicants will receive consideration for employment without regard to race, color, national origin, age, ancestry, religion, sex, sexual orientation, gender identity, gender expression, marital status, disability, medical condition, genetic information, pregnancy, or military or veteran status. We consider all qualified applicants, including those with criminal histories, in a manner consistent with state and local laws, including the California Fair Chance Act, City of Los Angeles' Fair Chance Initiative for Hiring Ordinance, and Los Angeles County Fair Chance Ordinance.

    For unincorporated Los Angeles County, to the extent our customers require a background check for certain positions, the Company faces a significant risk to its business operations and business reputation unless a review of criminal history is conducted for those specific job positions.
    $117k-161k yearly est. 13d ago
  • Matillion Data Engineer

    Mastech Digital 4.7 company rating

    Dallas, TX Jobs

    W2 only. Hybrid in Dallas, TX / Jersey City, NJ.
    Title: Cloud Data Engineer - Matillion Data Engineer
    Duration: Long term (ONLY W2)

    Job Description:
    Bachelor's degree, preferably in Computer Science, Information Technology, Computer Engineering, or a related IT discipline; or equivalent experience
    6+ years' experience as a Cloud Data Engineer
    6+ years' hands-on experience in Snowflake and Matillion
    4+ years' experience in Python with a focus on data processing and analytics
    4+ years in consulting
    2+ years in the healthcare domain (preferably Provider)
    Strong knowledge and hands-on experience in designing, developing, and deploying scalable solutions on cloud platforms
    Expertise in SQL and database technologies for data manipulation and querying
    Ability to travel 10%, on average, based on the work you do and the clients and industries/sectors you serve

    Preferred:
    Live near or be willing to relocate to Dallas or New Jersey. Hybrid work model; able to commute to the office 1-2 days a week as needed.
    Expertise with data modeling, data warehousing, and data integration concepts
    Experience with DevOps practices, CI/CD pipelines, and infrastructure as code (IaC) using tools like Jenkins, Git, and Terraform
    Strong analytical and problem-solving skills, with the ability to troubleshoot and resolve complex technical issues
    Familiarity with agile development methodologies and experience working in Agile teams

    Analytical/decision-making responsibilities:
    Analytical ability to manage multiple projects and prioritize tasks into manageable work products
    Can operate independently or with minimum supervision
    Excellent communication skills
    Ability to deliver technical demonstrations
    $81k-107k yearly est. 13d ago
  • Data Scientist

    The Lab Consulting 4.1 company rating

    Houston, TX Jobs

    We are a mid-sized Management Consulting, Automation, and Data/Process Science firm, established in 1993, serving Fortune 1000 companies throughout North America. We have developed a unique, template-based and data-centric approach to our client projects, which are conducted off-site from our Houston office. The Lab is proud to announce we have invested in a new office build-out in the Galleria area. We are mindful of employee experience and currently operate at 50% capacity in the office.

    We are seeking a data scientist who is passionate about business processes, automation, operational data measurement, and the intellectual challenge analyzing them offers. The person we seek has previous experience in successful data science roles, performing strategic analysis and/or operations improvement projects. The data scientist will be part of a management consulting and data science team that performs analysis on client data and assists with the development, implementation, and integration of pioneering solutions using different methods, techniques, and tools. You will ensure the rigor and underlying logic of the team's findings, optimize the analytical storyline, and develop superior, easy-to-comprehend documentation of operational analysis. The data scientist will be responsible for gathering, analyzing, and documenting business processes, developing business cases, developing analytics dashboards, and providing domain knowledge to the team.

    The ideal candidate's favorite words are learning, data, scale, and agility. You will leverage your strong collaboration skills and ability to extract valuable insights from highly complex data sets to ask the right questions and find the right answers. Simultaneously, you will help senior management further standardize the consulting tasks and related work product with the objectives of reducing analytical cycle time, lowering labor costs, and reducing document rework and editing. As you become more familiar with our product offering, you will also contribute to the refinement and extension of our findings and tools database/website, which includes benchmarks, best practices, and thousands of business process maps.

    Responsibilities
    - Interface with clients to gather operational data for analysis
    - Analyze raw data from consulting client projects across multiple industries: assessing quality, cleansing, structuring for downstream processing
    - Design accurate and scalable prediction algorithms
    - Collaborate with the team to bring analytical prototypes to production
    - Generate actionable insights for business improvements
    - Work alongside clients and internal team members to develop interactive, customer-facing business dashboards
    - Work with internal team members to develop methods to transform data to prepare for analysis and reporting
    - Manage the structure and functionality of our internal databases
    - Maintain and build tools to assist our research teams in updating, organizing, and expanding existing database content
    - Navigate client roadblocks that slow down projects
    - Proactively report to the internal team and clients on overall progress, potential issues, areas of potential improvement, etc.

    Qualifications
    - Bachelor's degree or equivalent experience in a quantitative field (Statistics, Mathematics, Computer Science, Engineering, etc.)
    - At least 1-2 years of experience in quantitative analytics or data modeling
    - Deep understanding of predictive modeling, machine-learning, clustering and classification techniques, and algorithms
    - Fluency in a programming language (Python, C, C++, Java, SQL)
    - Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau)
    $73k-105k yearly est. 13d ago
  • Business Data Engineer

    Brooksource 4.1 company rating

    Minersville, PA Jobs

    Hybrid (Pipersville, PA or Houston, TX) 6 month Contract to Hire The Business Data Engineer Level 2 serves as a technical escalation point within the organization's data infrastructure framework. This role involves advanced reporting, data analysis, and automation. The role also involves designing and developing automated reports, dashboards, and data pipelines while handling larger program updates. Primary Accountabilities and Job Activities: Perform advanced reporting and data analysis to support business operations. Develop customized business reports, dashboards, and self-service tools for decision-makers using Power BI, AWS QuickSight, and other BI tools. Utilize PowerShell and Power Automate to develop automated workflows, data extractions, and job scheduling. Debug database scripts, troubleshoot reports, and resolve technical conflicts. Build and document data pipelines, ETL processes, and data transformation workflows. Monitor and triage issues in the BDE Service Desk queue, responding within the defined Service Level Objectives (SLOs). Validate data sets and reports to ensure accuracy, consistency, and quality. Identify opportunities to improve data reliability, efficiency, and integrity. Develop scripts/tools to extract, transform, and load (ETL) data from multiple sources. Support Software Quality Assurance (SQA) efforts, including test design, execution, and UAT. Assist in IT projects as needed, including system upgrades and integrations. Provide Level I support escalation for complex technical issues. Collaborate with Supervisors/Level III Engineers on project execution, testing, and troubleshooting. Serve as an SME in one or more specialized areas, offering guidance and best practices. Develop automated report schedules, security policies, and backend job management. Manage data synchronization between third-party platforms, including e-commerce and accounting systems. Maintain Standard Operating Procedures (SOPs) and ensure thorough documentation of data processes. Qualifications: Bachelor's Degree in Computer Science, Statistics, Mathematics, Engineering, Economics, or a related field is preferred 3-5 years of experience in data engineering, analytics, or a similar technical field. Strong expertise in MS SQL, including writing ETL queries and managing relational databases. Experience with Microsoft Power Platform tools, including Power BI, Power Apps, and Power Automate. Experience with custom development and custom reporting involving ERP systems, E-Commerce data backends and distribution center inventory systems required Proficiency in PowerShell for automation and scripting. Experience with software development and deployment management tools, such as Jira, Confluence, Jenkins, Git. Experience with Microsoft SSRS, Talend, Amazon Redshift, and Amazon Athena preferred. Experience with AWS QuickSight or other BI tools for dashboarding and analytics is a plus Excellent problem-solving skills, ability to analyze large and unstructured datasets, and detect trends or anomalies. Strong communication and presentation skills, with attention to detail. Ability to manage multiple projects and meet deadlines in a fast-paced environment. Persistence in ensuring data accuracy, quality, and security. Ability to interact with end users for troubleshooting and training.
    $91k-125k yearly est. 20d ago
  • MarTech Data Science

    Russell Tobin 4.1 company rating

    Fremont, CA Jobs

    Job Title: Senior Data Scientist (MarTech)
    Duration: 12+ months contract (potential to extend)
    Pay: $100-$110/hr on W2 (depending on experience)

    MarTech Data Science Measurement empowers Client to optimize marketing ROI by generating data-driven recommendations. We lead the way in defining and advancing best practices for measuring and optimizing marketing impact. We collaborate with Marketing, Finance, and Engineering to provide actionable recommendations and tools based on effective, timely, and granular measurements.

    Our team's tenets are:
    - Actionable: Deliver insights that drive confident business decisions.
    - Impactful: Prioritize projects based on their expected value to Client.
    - Balanced: Adapt methods to business questions and data realities, acknowledging limitations.
    - Rigorous: Maintain methodological integrity and quantify the sensitivity of findings.
    - Innovative: Invest in advancing measurement science and developing new methods.
    - Influential: Share learnings across Client and the broader data science community.

    The difference you will make: We are seeking an experienced Data Scientist with deep expertise and experience building data pipelines and automating measurement systems. The ideal candidate will have experience working with Apache Airflow and Spark, as well as a comprehensive understanding of SQL. They are expected to be fluent in multiple statistical programming languages, particularly R and Python. They should also be familiar with causal inference methodologies, including experimental and observational approaches. The ideal candidate will be capable of hands-on data work to produce insights and recommendations for stakeholders, driving impactful decisions in our marketing efforts.

    A typical day:
    - Data Pipelines: Build novel data pipelines that accelerate the work of our data science team, including automating data validation, unit testing, and other common processes.
    - Productionalizing Prototypes: Refine prototypes built by other scientists and prepare them to be productionalized within Client's systems.
    - Causal Inference: Apply and develop causal inference methods, especially around MMM, to estimate the effectiveness of Client's marketing initiatives.
    - Data Analysis: Conduct data pulls, analyze trends, and create new features to support measurement efforts.
    - Collaboration: Work effectively with cross-functional teams, providing insights that optimize marketing strategies.

    Your expertise:
    - PhD in Economics, Statistics, Marketing, or a related field, or a Master's degree in a similar field with 2+ years of experience.
    - Deep knowledge of data management tools, systems, and processes, including Apache Airflow and Spark.
    - Proficiency in statistical programming (Python and R) and database usage (SQL).
    - Experience with media mix modeling (MMM).
    - Ability to communicate complex concepts clearly to stakeholders at varying technical levels.
    - Proven track record of solving business problems through data science methods.

    Preferred expertise:
    - Passion for marketing and consumer science, with a desire to stay informed about the latest advances in the field.
    - Familiarity with Bayesian modeling and its applications in marketing.
    - Experience with causal ML modeling.
    - Experience in developing end-to-end models for data-driven decision-making.
    $100-110 hourly 14h ago
  • Principal Data Scientist

    Robert Half 4.5 company rating

    Atlanta, GA Jobs

    Principal Data Scientist (Mixed Modelling, forecasting, and retail industry experience)

    Robert Half is in search of a Principal Data Scientist who will have ownership of identifying and understanding the business opportunities that the company's data presents and converting them into actionable, integrated, and advanced analytics/predictive solutions. The data scientist will work on creating data-driven applications that solve business problems for the company's customers. This role does not offer sponsorship or C2C. Hybrid: 3 days onsite.

    Responsibilities
    - Work with the company and partners to identify business opportunities/problems and deliver actionable insights/predictive solutions that will result in business value/growth for the company's customers.
    - Be at the forefront of research on how machine learning and artificial intelligence capabilities are maturing in the industry and what libraries, tools, and technologies the company needs to be successful in delivering, scaling, and supporting solutions.
    - Serve as the subject matter expert for generating insights from data and machine learning models.
    - Transform data and insights into intuitive and interactive visualizations.
    - Work with contractors to deliver products and services for the company's customers.
    - Conduct pilots with the company's customers and measure value.
    - Transition successful pilots into commercial applications.
    - Ensure design and implementation of technologies that comply with security standards and application architecture principles (in alignment with platform architects and respective review boards) and that consider state-of-the-art data and analytics concepts.
    - Review code/solutions developed by the team prior to migration to production.

    What makes you a good fit?
    - 8+ years of experience building data science solutions
    - Background in computer science, statistics, or mathematics
    - Strong algorithm forecasting experience
    - Strong experience with MMM
    - Retail experience: Food and Beverage or Consumer Packaged Goods (CPG) industry experience
    - Good understanding of data structures, algorithms, and software design
    - Experience with machine learning and natural language processing
    - Ability to code in at least one JVM-based language (Java, Scala, etc.) and at least one scripted language (Python, JavaScript, etc.)
    - Experience with database design and SQL
    - Ability to balance high customer orientation and service attitude with business priorities
    - Analytic thinking and problem-solving skills
    - High energy, with a strong will and ambition to learn and work on new things
    - Outstanding proven verbal, written, and interpersonal communication skills
    - Ability to adapt quickly to changing product scope and priorities (demand driven)
    - Proven ability to influence and collaborate effectively with cross-functional teams
    - Demonstrates noticeable commitment to foster and preserve a culture of diversity and inclusion by creating environments where people of diverse backgrounds are excited to bring all of who they are and do their best work
    - Bachelor's degree (or equivalent)
    $69k-98k yearly est. 8d ago
  • Senior Data Engineer

    Strategic Solutions LLC 4.2 company rating

    Dallas, TX Jobs

    We are seeking an experienced Data Engineer to expand and optimize our data pipeline architecture while improving data flow and collection across cross-functional teams. The ideal candidate is a skilled data pipeline builder and data wrangler who thrives on developing efficient data systems from the ground up. In this role, you will collaborate with software developers, database architects, data analysts, and data scientists to support data initiatives and ensure a consistent, optimized data delivery architecture. You will play a key role in aligning data systems with business goals while maintaining efficiency across multiple teams and systems. Responsibilities: Design, build, and maintain scalable data pipelines. Develop large, complex data sets to meet functional and non-functional business requirements. Automate manual processes, optimize data delivery, and improve infrastructure scalability. Build and manage ETL/ELT workflows for efficient data extraction, transformation, and loading from diverse sources. Develop analytics tools that leverage data pipelines to provide actionable business insights. Support data-related technical issues and optimize data infrastructure for various business functions. Collaborate with data scientists and analysts to enhance data-driven decision-making. Write complex SQL queries and develop database objects (e.g., stored procedures, views). Implement data pipelines using tools like SSIS, Azure Data Factory, Hadoop, Spark, or other ETL/ELT platforms. Maintain comprehensive documentation on data pipelines, processes, and workflows. Requirements: Bachelor's degree in Computer Science, Statistics, Informatics, Information Systems, or related field (or equivalent experience). 4-6 years of experience as a Data Engineer. Strong proficiency in SQL and T-SQL programming. Hands-on experience with ETL processes, data migration, and pipeline optimization. Experience working with relational (SQL) and NoSQL databases. Proficiency in Python or other object-oriented scripting languages. Experience analyzing and processing large, disconnected datasets. Expertise in Cloud technologies (Azure preferred; GCP or AWS also considered). Familiarity with Snowflake (a plus). Experience with message queuing and stream processing tools (Pub-Sub, Azure Event Grid, Kafka, etc.). Knowledge of data pipeline and workflow management tools (Airflow, Prefect, Apache NiFi, etc.). Experience with machine learning is a plus. Strong analytical and problem-solving skills. Ability to work in an Agile development environment. If you're a data-driven professional passionate about building scalable, high-performance data infrastructure, we'd love to hear from you! Apply now!
    $62k-89k yearly est. 14h ago
  • Data Engineer Architect

    Litmus7 4.2 company rating

    Los Angeles, CA Jobs

    Data Engineering Architect

    We are seeking an experienced Data Engineering Architect to design and implement scalable data solutions for our organization. The ideal candidate will have deep expertise in cloud data warehousing, ETL/ELT processes, data modeling, and business intelligence.

    Responsibilities:
    • Design and architect end-to-end data solutions leveraging AWS Redshift, Apache Airflow, dbt, and other modern data tools
    • Develop data models and implement data pipelines to ingest, transform, and load data from various sources into our Redshift data warehouse
    • Create and maintain Apache Airflow DAGs to orchestrate complex data workflows and ETL processes (see the orchestration sketch after this listing)
    • Implement data transformations and modeling using dbt to ensure data quality and consistency
    • Design and optimize Redshift clusters for performance, scalability, and cost-efficiency
    • Collaborate with data analysts and scientists to expose data through Tableau dashboards and reports
    • Establish data governance practices and ensure data security/compliance
    • Mentor junior data engineers and promote best practices across the data team
    • Evaluate new data technologies and make recommendations to improve our data architecture

    Requirements:
    • 7+ years of experience in data engineering, with at least 3 years in an architect role
    • Deep expertise with AWS Redshift, including data modeling, query optimization, and cluster management
    • Strong experience with Apache Airflow for workflow orchestration and scheduling
    • Proficiency with dbt for data transformation and modeling
    • Experience creating dashboards and reports in Tableau
    • Excellent SQL skills and experience with Python
    • Knowledge of data warehousing concepts and dimensional modeling
    • Strong communication skills and ability to work cross-functionally
    • Bachelor's or Master's degree in Computer Science, Engineering, or related field
    $112k-160k yearly est. 2d ago
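    The listing above describes Airflow DAGs orchestrating ETL and dbt transformations on Redshift. Below is a minimal, hedged sketch of what such a DAG might look like (Airflow 2.4+ assumed; the schedule, project paths, and load script are hypothetical placeholders, not details from the posting):

    ```python
    # Minimal sketch: an Airflow DAG that loads raw data and then runs dbt transformations.
    # The schedule, dbt project path, and load script are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_warehouse_refresh",
        start_date=datetime(2024, 1, 1),
        schedule="0 6 * * *",   # run daily at 06:00 (Airflow 2.4+ `schedule` parameter)
        catchup=False,
        tags=["redshift", "dbt"],
    ) as dag:
        # Stage raw data into the warehouse (e.g., COPY from S3) via a project-specific script.
        load_raw = BashOperator(
            task_id="load_raw_data",
            bash_command="python /opt/pipelines/load_raw_to_redshift.py",
        )

        # Build dbt models on top of the freshly loaded data.
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="cd /opt/dbt/analytics && dbt run --target prod",
        )

        # Validate the models with dbt tests before exposing them to dashboards.
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="cd /opt/dbt/analytics && dbt test --target prod",
        )

        load_raw >> dbt_run >> dbt_test
    ```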
  • Data Engineer - AI & ML

    Theron Solutions 4.1 company rating

    San Francisco, CA Jobs

    Responsibilities:

    1. Design and Build Data Pipelines:
    • Develop, construct, test, and maintain data pipelines to extract, transform, and load (ETL) data from various sources to data warehouses or data lakes.
    • Ensure data pipelines are efficient, scalable, and maintainable, enabling seamless data flow for downstream analysis and modeling.
    • Work with stakeholders to identify data requirements and implement effective data processing solutions.

    2. Data Integration:
    • Integrate data from multiple sources such as internal databases, external APIs, third-party vendors, and flat files.
    • Collaborate with business teams to understand data needs and ensure data is structured properly for reporting and analytics.
    • Build and optimize data ingestion systems to handle both real-time and batch data processing.

    3. Data Storage and Management:
    • Design and manage data storage solutions (e.g., relational databases, NoSQL databases, data lakes, cloud storage) that support large-scale data processing.
    • Implement best practices for data security, backup, and disaster recovery, ensuring that data is safe, recoverable, and complies with relevant regulations.
    • Manage and optimize storage systems for scalability and cost efficiency.

    4. Data Transformation:
    • Develop data transformation logic to clean, enrich, and standardize raw data, ensuring it is suitable for analysis.
    • Implement data transformation frameworks and tools, ensuring they work seamlessly across different data formats and sources.
    • Ensure the accuracy and integrity of data as it is processed and stored.

    5. Automation and Optimization:
    • Automate repetitive tasks such as data extraction, transformation, and loading to improve pipeline efficiency.
    • Optimize data processing workflows for performance, reducing processing time and resource consumption.
    • Troubleshoot and resolve performance bottlenecks in data pipelines.

    6. Collaboration with Data Teams:
    • Work closely with Data Scientists, Analysts, and business teams to understand data requirements and ensure the correct data is available and accessible.
    • Assist Data Scientists with preparing datasets for model training and deployment.
    • Provide technical expertise and support to ensure the integrity and consistency of data across all projects.

    7. Data Quality Assurance:
    • Implement data validation checks to ensure data accuracy, completeness, and consistency throughout the pipeline (see the validation sketch after this listing).
    • Develop and enforce data quality standards to detect and resolve data issues before they affect analysis or reporting.
    • Monitor and improve data quality by identifying areas for improvement and implementing solutions.

    8. Monitoring and Maintenance:
    • Set up monitoring and logging for data pipelines to detect and alert for issues such as failures, data mismatches, or delays.
    • Perform regular maintenance of data pipelines and storage systems to ensure optimal performance.
    • Update and improve data systems as required, keeping up with evolving technology and business needs.

    9. Documentation and Reporting:
    • Document data pipeline designs, ETL processes, data schemas, and transformation logic for transparency and future reference.
    • Create reports on the performance and status of data pipelines, identifying areas of improvement or potential issues.
    • Provide guidance to other teams regarding the usage and structure of data systems.

    10. Stay Updated with Technology Trends:
    • Continuously evaluate and adopt new tools, technologies, and best practices in data engineering and big data systems.
    • Participate in industry conferences, webinars, and training to stay current with emerging trends in data engineering and cloud computing.

    Requirements:

    1. Educational Background:
    • Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.

    2. Technical Skills:
    • Proficiency in programming languages such as Python, Java, or Scala for data processing.
    • Strong knowledge of SQL and relational databases (e.g., MySQL, PostgreSQL, MS SQL Server).
    • Experience with NoSQL databases (e.g., MongoDB, Cassandra, HBase).
    • Familiarity with data warehousing solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake).
    • Hands-on experience with ETL frameworks and tools (e.g., Apache NiFi, Talend, Informatica, Airflow).
    • Knowledge of big data technologies (e.g., Hadoop, Apache Spark, Kafka).
    • Experience with cloud platforms (AWS, Azure, Google Cloud) and related services for data storage and processing.
    • Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes) for building scalable data systems.
    • Knowledge of version control systems (e.g., Git) and collaboration tools (e.g., Jira, Confluence).
    • Understanding of data modeling concepts (e.g., star schema, snowflake schema) and how they relate to data warehousing and analytics.
    • Knowledge of data lakes, data warehousing architecture, and how to design efficient and scalable storage solutions.

    3. Soft Skills:
    • Strong problem-solving skills with an ability to troubleshoot complex data issues.
    • Excellent communication skills, with the ability to explain technical concepts to both technical and non-technical stakeholders.
    • Strong attention to detail and a commitment to maintaining data accuracy and integrity.
    • Ability to work effectively in a collaborative, team-based environment.

    4. Experience:
    • 3+ years of experience in data engineering, with hands-on experience in building and maintaining data pipelines and systems.
    • Proven track record of implementing data engineering solutions at scale, preferably in large or complex environments.
    • Experience working with data governance, compliance, and security protocols.

    5. Preferred Qualifications:
    • Experience with machine learning and preparing data for AI/ML model training.
    • Familiarity with stream processing frameworks (e.g., Apache Kafka, Apache Flink).
    • Certification in cloud platforms (e.g., AWS Certified Big Data - Specialty, Google Cloud Professional Data Engineer).
    • Experience with DevOps practices and CI/CD pipelines for data systems.
    • Experience with automation and orchestration tools (e.g., Apache Airflow, Luigi).
    • Familiarity with data visualization and reporting tools (e.g., Tableau, Power BI) to support analytics teams.

    6. Work Environment:
    • Collaborative and fast-paced work environment.
    • Opportunity to work with state-of-the-art technologies.
    • Supportive and dynamic team culture.

    EOE: Our client is an Equal Opportunity Employer and Prohibits Discrimination and Harassment of Any Kind. We are committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment.
All employment decisions at our client are based on business needs, job requirements, and individual qualifications, without regard to race, color, religion or belief, national, social or ethnic origin, sex (including pregnancy), age, physical, mental or sensory disability, HIV Status, sexual orientation, gender identity and/or expression, marital, civil union or domestic partnership status, past or present military service, family medical history or genetic information, family or parental status, or any other status protected by the laws or regulations in the locations where we operate. We will not tolerate discrimination or harassment based on any of these characteristics.
    $121k-171k yearly est. 22d ago
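    The Theron listing above highlights data validation checks for accuracy, completeness, and consistency. As a hedged illustration only (the column names and rules are invented, not taken from the posting), here is a minimal pandas sketch of batch-level validation checks:

    ```python
    # Minimal sketch: simple completeness/validity checks on a batch before loading it.
    # Column names and rules are hypothetical placeholders.
    import pandas as pd

    def validate_batch(df: pd.DataFrame) -> list[str]:
        """Return a list of human-readable validation failures (empty list = pass)."""
        failures: list[str] = []

        # Completeness: required columns must exist and contain no nulls.
        for col in ("event_id", "event_time", "user_id"):
            if col not in df.columns:
                failures.append(f"missing required column: {col}")
            elif df[col].isna().any():
                failures.append(f"null values found in required column: {col}")

        # Uniqueness: the primary key must not repeat within the batch.
        if "event_id" in df.columns and df["event_id"].duplicated().any():
            failures.append("duplicate event_id values in batch")

        # Validity: amounts, when present, must be non-negative.
        if "amount" in df.columns and (df["amount"] < 0).any():
            failures.append("negative values found in amount")

        return failures

    if __name__ == "__main__":
        batch = pd.DataFrame(
            {"event_id": [1, 2, 2], "event_time": ["2024-01-01", "2024-01-01", None],
             "user_id": ["a", "b", "c"], "amount": [10.0, -5.0, 3.0]}
        )
        for problem in validate_batch(batch):
            print("FAILED:", problem)
    ```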
  • Data Analytics developer/ Data Analytics Engineer

    Tekwissen 3.9 company rating

    Saint Louis, MO Jobs

    Job Title: Data Analytics Developer / Data Analytics Engineer
    Duration: 6 Months
    Job Type: Contract
    Work Type: Remote
    Pay Rate: $45-50/hour on W2

    TekWissen is a global workforce management provider headquartered in Ann Arbor, Michigan that offers strategic talent solutions to our clients worldwide. This client is a German multinational pharmaceutical and biotechnology company and one of the largest pharmaceutical companies in the world, headquartered in Leverkusen; its areas of business include pharmaceuticals, consumer healthcare products, agricultural chemicals, seeds, and biotechnology products.

    Job Description: The selected candidate should be able to help support our Data and Process Governance Policies and Standards for data utilization and access. A strong business insight mindset and analytics background will help enable data visualizations, while helping maintain the technical foundation they are built on. Knowledge of industry-standard applications and database theory will allow us to transform the business by optimizing our workflow and data processing to deliver data to the shop floor.

    Additional Info:
    - Interconnectivity between industrial automation data, MIS, and the data warehouse.
    - Helping influence and implement an end-to-end vision for how production field data will flow through the organization.
    - Merge new systems or methods with existing data structures.
    - Partner with Supply Chain to define, document, implement, and maintain business processes and data workflows.
    - Implement data visualizations and drive reporting solutions based on the needs and feedback of Engineers, PCIT Developers, and End Users.
    - Help our engineers and PCIT developers create data reports for the business.
    - Will work with the manager and regional automation or process engineers, depending on the site.
    - 50% new development / 50% maintenance.

    Team Dynamic/Culture: Currently 1 FT and 5 contractors. The current team has been working on this project for about 1 year; there is an existing automation team that has been stood up for a while, which this team integrates with.

    Skill Set:
    - Experience in managing and communicating complex projects and collaborating with cross-functional teams to accomplish project goals within expected timelines.
    - Knowledgeable on data governance concepts and implementation
    - Data Access Management
    - Change Management
    - Data Historian / Time Series Data
    - SQL, MSSQL
    - Data modeling for creating/maintaining data integrity between multiple schemas
    - ETL experience
    - Denodo/data virtualization experience preferred
    - Knowledgeable on deployment process flow concepts
    - Python experience preferred
    - Tableau Server/Desktop/Prep setup and dashboard building
    - Strong initiative, results orientation, and ability to work independently with minimal direction.
    - Demonstrated ability to see differing perspectives and work cross-functionally.
    - This is a client-facing role, so good communication skills, written and verbal, are required.

    Education: A bachelor's degree would be nice but is not required; an associate's degree with experience is acceptable. Experience required is 4-6 years.

    TekWissen Group is an equal opportunity employer supporting workforce diversity.
    $73k-97k yearly est. 13d ago
  • Data Scientist (Big Data Systems)

    Bayside Solutions 4.5 company rating

    Cupertino, CA Jobs

    W2 Contract Salary Range: $135,200 - $156,000 per year Duties and Responsibilities: Ad-hoc data analysis and investigations running queries on our big data systems (SQL, Splunk, and HDFS). Assist in tracking and managing data projects to ensure successful completion. Facilitate data access management, including data creation and data onboarding process. Requirements and Qualifications: Proficient in Python and able to automate scripts and tools using Python Experience in Spark to process and query large datasets efficiently Superb communication skills (both verbal and written) with the ability to present the results of analyses in a clear and impactful manner Understanding of algorithms (tweak them when needed) as well as infrastructure that enables fast iterations Experience in documenting (i.e., data schemas, compliance policies, timeline, project management, and status updates) Capable of assisting with data investigations and providing recommendations on leveraging existing data effectively. Project management experience is preferred for oversight of data operations. Desired Skills and Experience Data analysis, access management, Python, Spark, algorithms, documenting, project management, SQL, Splunk, HDFS Bayside Solutions, Inc. is not able to sponsor any candidates at this time. Additionally, candidates for this position must qualify as a W2 candidate. Bayside Solutions, Inc. may collect your personal information during the position application process. Please reference Bayside Solutions, Inc.'s CCPA Privacy Policy at *************************
    $135.2k-156k yearly 21d ago
  • Senior Big Data Engineer (PySpark & Hadoop) - No C2C

    Mindlance 4.6 company rating

    Jersey City, NJ Jobs

    Senior Big Data Engineer (PySpark & Hadoop) HIRING DRIVE - DAY of INTERVIEW - Thurs, 03/13 and Fri, 03/14 - all interviews will be conducted on these days. Job Description: We are seeking an experienced Senior Big Data Engineer with a strong background in PySpark and Hadoop to join our direct client in the banking industry. The ideal candidate will have a deep understanding of large-scale data processing and optimization, along with hands-on experience in building high-performance data solutions. Key Responsibilities: Minimum 8+ years of experience in big data development using PySpark within the banking/financial sector. Expertise in designing, developing, and optimizing large-scale data processing applications to ensure performance and efficiency. In-depth proficiency with PySpark and the Apache Spark ecosystem for distributed data processing. Strong programming skills in Python with a focus on PySpark. Comprehensive understanding of Hadoop architecture, Hive, and HDFS for data storage and retrieval. Advanced proficiency in SQL development, including query optimization and performance tuning for high-volume data processing. This role is a great fit for someone who thrives in banking and financial environments, handling complex data pipelines and optimizing large-scale big data applications. “Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
    $89k-120k yearly est. 13d ago
  • Data Science Engineer - Machine Learning

    Robotics Technologies LLC 4.1 company rating

    Palo Alto, CA Jobs

    W-2 Open Positions Need to be Filled Immediately. Consultant must be on our company payroll; Corp-to-Corp (C2C) is not allowed. Candidates are encouraged to apply directly using this portal. We do not accept resumes from other company/third-party recruiters.

    Job Overview
    Job ID: J36993
    Specialized Area: Data Science
    Job Title: Data Science Engineer - Machine Learning
    Location: To Be Discussed Later
    Duration: 7 Months
    Domain Exposure: Healthcare, Insurance, IT/Software
    Work Authorization: W-2 (Consultant must be on our company payroll. C2C is not allowed)

    Responsibilities:
    - Help drive data science projects from concept, development, and deployment to production.
    - Propose statistical or machine learning based models/methodologies to solve the problem.
    - Propose accuracy measures and validation criteria for the model (see the validation sketch after this listing).
    - Implement and evaluate the proposed model/methodology.
    - Provide deep technical guidance and mentorship to the entire team.
    - Work with product managers to formulate the ML product vision and design.

    Qualifications:
    - Master's or Ph.D. in Computer Science, applied mathematics, or a related engineering discipline is essential.
    - 4+ years building data-powered products.
    - Strong background in machine learning, statistics, and programming.
    - Proven track record of analyzing large-scale complex data sets, modeling, and machine learning algorithms.
    - Expertise in at least one high-level programming language like Java, Scala, C++, or Python (NumPy, SciPy, Pandas).
    - Expertise in at least one statistical modeling tool from among R, Matlab, or Weka is a plus.

    ROBOTICS TECHNOLOGIES LLC is an equal opportunity employer inclusive of female, minority, disability, and veterans (M/F/D/V). Hiring, promotion, transfer, compensation, benefits, discipline, termination, and all other employment decisions are made without regard to race, color, religion, sex, sexual orientation, gender identity, age, disability, national origin, citizenship/immigration status, veteran status, or any other protected status. ROBOTICS TECHNOLOGIES LLC will not make any posting or employment decision that does not comply with applicable laws relating to labor and employment, equal opportunity, employment eligibility requirements, or related matters. Nor will ROBOTICS TECHNOLOGIES LLC require in a posting or otherwise U.S. citizenship or lawful permanent residency in the U.S. as a condition of employment except as necessary to comply with law, regulation, executive order, or federal, state, or local government contract.
    $126k-177k yearly est. 5d ago
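    The listing above asks candidates to propose accuracy measures and validation criteria for models. As an illustrative sketch only (synthetic data, and the choice of ROC AUC with stratified folds is an assumption rather than anything specified in the posting), here is a minimal scikit-learn example of cross-validated evaluation of a baseline classifier:

    ```python
    # Minimal sketch: cross-validated evaluation of a baseline model with an explicit metric.
    # The synthetic dataset and chosen metric (ROC AUC) are illustrative only.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Stand-in for real project data (imbalanced binary classification).
    X, y = make_classification(
        n_samples=2_000, n_features=20, weights=[0.9, 0.1], random_state=0
    )

    # Baseline model: scaling + regularized logistic regression.
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

    # Validation criterion: stratified 5-fold ROC AUC (robust to the class imbalance above).
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")

    print(f"ROC AUC per fold: {np.round(scores, 3)}")
    print(f"Mean +/- std: {scores.mean():.3f} +/- {scores.std():.3f}")
    ```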
  • Console Gaming Engineer (SOC/RTOS/Bare Metal) [77713]

    Onward Search 4.0 company rating

    Camas, WA Jobs

    Looking for 7+ years of experience. Console Gaming Engineering team.

    Job Description

    Must have:
    - C programming
    - SOC development with RTOS or bare metal
    - Bluetooth audio profiles (HFP / A2DP) or application development

    Nice to have:
    - ThreadX or FreeRTOS
    - USBX
    - SAI / I2S
    - STM32 Cube IDE
    - Audio Kit
    - TouchGFX
    $86k-130k yearly est. 2d ago
