
Data Engineer jobs at B & P Enterprises

- 297 jobs
  • Data Scientist with Hands On development experience with R, SQL & Python

    Central Point Partners (3.7 company rating)

    Columbus, OH

    *Per the client, no C2Cs!* Central Point Partners is currently interviewing candidates in the Columbus, OH area for a large client. GCs and USCs only. This position is hybrid (4 days onsite). Only candidates who are local to Columbus, OH will be considered.

    Data Scientist with hands-on development experience with R, SQL & Python

    Summary: Our client is seeking a passionate, data-savvy Senior Data Scientist to join the Enterprise Analytics team to fuel our mission of growth through data-driven insights and opportunity discovery. This dynamic role uses a consultative approach with the business segments to dive into our customer, product, channel, and digital data to uncover opportunities for consumer experience optimization and customer value delivery. You will also enable stakeholders with actionable, intuitive performance insights that give the business direction for growth. The ideal candidate will have a robust mix of technical and communication skills, with a passion for optimization, data storytelling, and data visualization. You will collaborate with a centralized team of data scientists as well as teams across the organization, including Product, Marketing, Data, Finance, and senior leadership. This is an exciting opportunity to be a key influencer on the company's strategic decisions and to learn and grow with our Analytics team.

    Notes from the manager: The critical skills will be Python or R and a firm understanding of SQL, along with a foundational understanding of what data is needed to perform studies now and in the future. As a high-level summary of what this person will be asked to do alongside their peers: this person will balance analysis with development, knowing when to jump in and when to step back to lend their expertise.

    Feature & Functional Design: Data scientists are embedded in the teams designing the feature. Their main job here is to define the data tracking needed to evaluate the business case: things like event logging, Adobe tagging, third-party data ingestion, and any other tracking requirements. They also consult on if/when the business should bring data into the bank, and will help connect the business with CDAO and IT warehousing and data engineering partners should new data need to be brought forward.

    Feature Engineering & Development: The same data scientists stay involved as the feature moves into execution. They support all necessary functions (Amigo, QA, etc.) to ensure data tracking is in place when the feature goes live. They also begin preparing to support launch evaluation and measurement against the experimentation design or business case success criteria.

    Feature Rollout & Performance Evaluation: They own tracking the rollout, running A/B tests, and conducting impact analysis for all features they supported through the Feature & Functional Design and Feature Engineering & Development stages. They provide an unbiased view of how the feature performs against the original business case, along with objective recommendations that give the business direction. They roll off once the feature has matured through business case/experiment design and evaluation.

    In addition to supporting feature rollouts, data scientists on the team are also encouraged to pursue self-driven initiatives during periods when they are not actively supporting other projects. These initiatives may include designing experiments, conducting exploratory analyses, developing predictive models, or identifying new opportunities for impact.

    For more information about this opportunity, please contact Bill Hart at ************ AND email your resume to **********************************!
    $58k-73k yearly est. 22h ago
  • Engineer

    Feditc (4.1 company rating)

    Texarkana, TX

    FEDITC, LLC is a fast-growing business supporting DoD and other intelligence agencies worldwide. FEDITC develops mission-critical national security systems throughout the world, directly supporting the Warfighter, DoD Leadership, and the country. We are proud and honored to provide these services.

    Overview of position: We are looking for an Engineer to work in the Texarkana area. The Engineer will play a key role in supporting depot maintenance and production operations in Texarkana, focusing on the design, development, and improvement of complex equipment and tooling used in the overhaul, repair, modification, and upgrade of both wheeled and tracked military vehicles. This position requires a highly skilled engineer capable of performing original design studies, developing innovative solutions for specialized vehicle systems (including hulls, suspensions, engines, transmissions, and electronic components), and integrating advanced automation technologies such as robotics and machine vision into depot operations. The Engineer will also oversee the fabrication, assembly, and implementation of production and test equipment, ensure proper function and efficiency, and provide training and technical support to operational personnel. An active NACI and United States citizenship are required to be considered for this position.

    Responsibilities: Perform original design studies related to the concept and design of equipment, fixtures, and tooling to support primary vehicle systems and their components, including hulls, chassis, suspensions, turrets, armament, engines, transmissions, final drives, fire control instruments, electronic components, hydraulic components, and auxiliary equipment. Provide complex independent support for the depot mission in the conceptual design, improvement, and installation of mission production equipment, associated facilities, methods, and procedures to predict, evaluate, and specify results. Monitor technological developments of equipment used in both private industry and government operations. Review mission overhaul, repair, modification, and upgrade programs to ensure present systems and methods perform required functions in the most economical manner. Design complete and complex production and test equipment for the depot maintenance program. Oversee the purchase and fabrication of equipment, fixtures, and tools, many of which are unique due to specialized requirements for tracked and wheeled vehicle and artillery maintenance operations not found commercially or within existing designs. Incorporate flexible automation such as robotics and machine vision technology into design efforts. Oversee assembly and ensure proper operation/function of equipment. Demonstrate, train, and release equipment to operating shop personnel.

    Experience/Skills: 5-10 years of relevant engineering experience required. Strong knowledge of mechanical design principles, manufacturing processes, and automation technologies. Experience with production or test equipment design for vehicle systems is highly desirable. Ability to manage multiple design and implementation projects simultaneously.

    Clearance: An active NACI clearance is required. Must be a United States citizen and pass a background check. Maintain applicable security clearance(s) at the level required by the client and/or applicable certification(s) as requested by FEDITC and/or required by FEDITC's client(s)/customer(s)/prime contractor(s).

    FEDITC, LLC is committed to fostering an inclusive workplace and provides equal employment opportunities (EEO) to all employees and applicants for employment. We do not employ AI tools in our decision-making processes. FEDITC, LLC ensures that all employment decisions are made in accordance with applicable federal, state, and local laws, regardless of race, color, religion, sex (including pregnancy), sexual orientation, gender identity or expression, national origin, age, disability, genetic information, marital status, amnesty, or status as a covered veteran. Our commitment to non-discrimination in employment extends to every location in which our company operates.
    $73k-100k yearly est. 22h ago
  • Cloud Engineer

    Trinity Consultants (4.5 company rating)

    Dallas, TX

    We are seeking a skilled and motivated Cloud Engineer to join our team. The ideal candidate will have strong expertise in Microsoft Azure, cloud infrastructure, CI/CD pipelines, automation, and cloud migration. You will be responsible for designing, implementing, and managing scalable cloud solutions, ensuring high availability, security, and performance of applications and services, while leading migration initiatives from on-premises or other cloud platforms to Azure.

    Key Responsibilities:

    Cloud Infrastructure & Migration: Design, deploy, and manage cloud-based infrastructure on Microsoft Azure. Lead cloud migration projects from on-premises environments or other cloud providers to Azure. Assess current infrastructure, create migration strategies, and execute step-by-step migration plans. Optimize migrated workloads for scalability, cost efficiency, and performance. Ensure compliance, security, and minimal downtime during migration.

    DevOps Engineering: Build, manage, and optimize CI/CD pipelines using Azure DevOps. Automate application deployment, configuration management, and infrastructure provisioning. Collaborate with developers to streamline code releases and software delivery.

    Monitoring & Operations: Set up monitoring, logging, and alerting solutions (e.g., Azure Monitor, Application Insights, Log Analytics). Manage system performance, troubleshoot issues, and ensure uptime of production workloads. Implement disaster recovery and backup strategies.

    Security & Governance: Apply security best practices, including identity and access management (IAM) via Azure Active Directory. Enforce compliance policies and governance frameworks. Manage secrets and credentials securely with Azure Key Vault.

    Collaboration & Support: Work closely with development, QA, and operations teams to ensure smooth delivery of projects. Provide technical support and guidance on cloud and DevOps practices. Mentor junior engineers and promote DevOps culture across teams.

    Required Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience). 7+ years of experience required. Proven experience with Microsoft Azure cloud services (VMs, App Services, AKS, Storage, Networking). Strong background in cloud migration projects (on-premises to Azure or cross-cloud). Hands-on experience with CI/CD pipelines in Azure DevOps. Knowledge of Infrastructure-as-Code (IaC) tools (Terraform, ARM, Bicep). Proficiency in scripting languages (PowerShell, Bash, or Python). Experience with containerization and orchestration (Docker, Kubernetes/AKS). Familiarity with monitoring and logging tools (Azure Monitor, Prometheus, Grafana, etc.). Understanding of networking, security, and governance in cloud environments.

    Preferred Skills: Certifications: Microsoft Certified Azure Administrator, DevOps Engineer Expert, or Solutions Architect. Experience in large-scale data center or enterprise application migrations. Knowledge of automation/configuration management tools (Ansible, Chef, Puppet). Familiarity with Git, GitHub Actions, or Jenkins. Exposure to agile methodologies and DevOps practices.

    Soft Skills: Strong problem-solving and troubleshooting abilities. Excellent communication and collaboration skills. Ability to work in a fast-paced, dynamic environment. Continuous learning mindset with a passion for automation, innovation, and modernization.
    $67k-92k yearly est. 22h ago
  • AI (Python) Engineer

    Trinity Consultants (4.5 company rating)

    Dallas, TX

    Department: Information Technology. Reports To: Manager of Application Development. This role is exempt from overtime.

    ROLE: AI (Python) Engineer

    Overview: We are seeking an AI Engineer with proven experience in building and scaling AI-powered applications. This role combines hands-on development with AI research and training responsibilities. The ideal candidate will design, prototype, and productionize AI agents, integrate them into enterprise workflows, and publish training resources to help teams adopt and apply AI effectively. You will work across front-end, back-end, and AI services, leveraging frameworks such as React.js, FastAPI, and Azure AI Foundry, while exploring the latest Generative AI, RAG, and agentic frameworks.

    Key Responsibilities:

    AI Development: Design, prototype, and productionize AI agents capable of intelligent communication, information retrieval, and task execution. Develop and optimize high-performance code for LLM-based and agentic AI models. Drive the roadmap for Generative AI use cases, ensuring scalability and real-world impact.

    Collaboration with Teams: Partner with transformation, engineering, and business teams to tailor AI agents for domain-specific workflows. Work with AI platform teams to ensure robust infrastructure, observability, and monitoring.

    Integration & Deployment: Lead and contribute to AI projects from POC to production. Deploy, monitor, and scale AI solutions on Azure, AWS, or GCP, using containerized environments (Docker, Kubernetes). Integrate AI pipelines into front-end and back-end applications.

    Continuous Learning & Optimization: Stay current with cutting-edge AI research (Generative AI, RAG, agentic frameworks). Perform testing, validation, and performance tuning for compliance and reliability. Research and publish internal training resources, workshops, and documentation to upskill teams.

    Technologies & Tools: LLMs: OpenAI, LLaMA, Mistral, Gemini, Claude, Grok. Agentic frameworks: LangChain, CrewAI, A2A, LlamaIndex, RAG pipelines. Programming & frameworks: Python, FastAPI, SQL, CosmosDB, Flask, Streamlit, Chainlit. Frontend: React, Angular. Cloud & deployment: Azure AI Foundry, Azure OpenAI, Docker, Kubernetes, vector DBs. Other tools: Hugging Face Transformers, TensorFlow, Cognitive Services.

    Qualifications:

    Education: Bachelor's degree in Computer Science, Data Science, AI/ML, or a related field (required). Master's degree (preferred).

    Experience: 2-3 years of hands-on experience in developing, deploying, and scaling AI/ML solutions. Proven success with Generative AI, NLP/NLU, and AI agent frameworks. Experience leading AI-focused projects and collaborating with business and technical stakeholders.

    Technical Skills: Strong proficiency in Python and SQL. Familiarity with front-end technologies (React, Angular). Experience with containerization and cloud services (Azure/AWS/GCP). Solid understanding of prompt engineering, LLM fine-tuning, and RAG techniques.

    Soft Skills: Clear communication (able to explain complex AI concepts to technical and non-technical audiences). Collaboration and teamwork (works well across engineering, business, and research teams). Adaptability (thrives in fast-moving environments with evolving AI tools). Curiosity and a learning mindset (passionate about exploring and teaching new AI capabilities). Problem-solving and critical thinking (proactive in diagnosing issues and designing innovative solutions). Knowledge sharing (ability to research and publish training materials to foster AI adoption across the organization).
    $67k-92k yearly est. 2d ago
  • Associate Fraud Strategy Data Scientist (Hybrid)

    Together We Talent (3.8 company rating)

    San Jose, CA

    Associate Fraud Strategy Data Scientist | San Jose, CA (Hybrid) | Contract | $50/hour

    Duration: 1-year contract (to cover multiple leaves), with possible extension based on performance.

    Analyze fraud patterns, build predictive models, and drive risk mitigation strategies at a fast-paced fintech consultancy. A leading consultancy firm supporting a fast-growing fintech client is hiring a contract Associate Fraud Strategy Data Scientist to help fight fraud at scale. This is a hybrid role based in the San Jose area, ideal for a mid-level data scientist with experience in fraud, payments, or eCommerce. The ideal candidate is a curious, impact-driven professional who can dive into large datasets, design fraud detection strategies, and clearly communicate data-driven insights to technical and non-technical teams.

    Position Overview: In this role, you'll partner with the Fraud Risk Strategy team to design and refine fraud detection rules, support strategy development with data science models, and surface actionable insights using SQL, Python, Tableau, and large-scale datasets. You'll also work cross-functionally with product and engineering to improve fraud mitigation capabilities and customer experience.

    Key Responsibilities: Design fraud detection and mitigation rules. Build Python scripts and data science models to support risk strategies. Analyze large datasets to identify fraud patterns and root causes. Collaborate with engineering and product teams to strengthen fraud controls. Develop dashboards and data visualizations using Tableau. Guide execution of fraud strategy roadmaps. Present findings and recommendations to leadership and cross-functional stakeholders.

    Required Qualifications: Bachelor's degree in Data Analytics, Data Science, Statistics, Mathematics, or a related field. Up to 2 years of professional experience in risk analytics, fraud detection, or online payments. Advanced SQL skills and proficiency in Python (plus data science libraries). Strong experience with Tableau or similar data visualization tools. Experience working with large datasets and deriving actionable insights. Ability to communicate findings clearly across stakeholders and teams.

    Preferred Skills & Bonus Experience: AWS, QuickSight, or cloud-based analytics platforms. Experience working with fraud rule systems or ML models. Understanding of fraud typologies or abuse detection. Experience supporting investigations or product abuse cases. Prior exposure to eCommerce, fintech, or online marketplaces.

    Expected Outcomes (6-12 Months): Collaborate on new fraud strategies based on emerging threats. Deliver dashboards tracking KPIs and fraud loss metrics. Deploy data-backed solutions that improve fraud controls while enhancing customer experience. Support risk mitigation efforts that reduce financial losses across the platform.
    $50 hourly 43d ago
  • Senior Data Engineer, 1

    People Inc. (3.0 company rating)

    New York, NY

    People Inc. is a leading digital media company that owns and operates a portfolio of highly respected brands across various verticals, including lifestyle, health, finance, and more. With a commitment to providing high-quality content and innovative digital experiences, People Inc. reaches millions of users globally and continues to drive growth and engagement across its platforms.

    People Inc. is looking for a Senior Data Engineer with strong Python and SQL skills to join the Data Operations team. The successful candidate will help build our data integration pipelines that feed into our data lakes and warehouses while maintaining data quality and integrity in our data stores. We are looking for someone who is a great team player but can also work independently. This person will also work closely with key stakeholders to understand and implement business requirements and see to it that data deliverables are met.

    Remote or Hybrid (3x a week, New York City). In-office Expectations: This position offers remote work flexibility; however, if you reside within a commutable distance to one of our offices in New York, Des Moines, Birmingham, Los Angeles, or Chicago, the expectation is to work from the office three times per week.

    About the Position's Contributions (weight %, accountabilities, actions, and expected measurable results):

    60%: You will enhance our systems by building new data integration pipelines and adding new data to our data lakes and warehouses while continuously optimizing them. You will work with internal team members as well as stakeholders to scope out business requirements and see data deliverables through to the end, where they will be used via our Looker platform. You will continuously look for ways to improve our data transformations and data consumption processes so that our systems run efficiently and our customers are able to use and analyze our data quickly and effectively.

    40%: You will champion coding standards and best practices by actively participating in code reviews and working to improve our internal tools and build process. You will work to ensure the security and stability of our infrastructure in a multi-cloud environment. You will collaborate with our Analytics engineers to ensure data integrity and the quality of our data deliverables.

    The Role's Minimum Qualifications and Job Requirements:

    Education: Degree in a quantitative field such as computer science, statistics, mathematics, engineering, or data science, or equivalent experience.

    Experience: A minimum of 5 years of experience in building and optimizing data pipelines with Python. You have experience writing complex SQL queries to analyze data; window functions and nested subqueries are second nature to you. You have commendable experience with at least one cloud service platform (GCP and AWS preferred). You've worked with data at scale using Apache Spark, Beam, or a similar framework. You're familiar with data streaming architectures using technologies like Pub/Sub and Apache Kafka. You are eager to learn about new tech stacks, big data technologies, data pipelining architectures, etc., and propose your findings to the team to try to optimize our systems.

    Specific Knowledge, Skills, Certifications and Abilities: Strong Python and SQL skills. Experience with Google Cloud Platform is a plus.

    It is the policy of People Inc. to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, the Company will provide reasonable accommodations for qualified individuals with disabilities. Accommodation requests can be made by emailing *************.

    The Company participates in the federal E-Verify program to confirm the identity and employment authorization of all newly hired employees. For further information about the E-Verify program, please click here: **********************************

    Pay Range: New York: $170,000 - $180,000; Remote: $160,000 - $170,000. The pay range above represents the anticipated low and high end of the pay range for this position and may change in the future. Actual pay may vary and may be above or below the range based on various factors including but not limited to work location, experience, and performance. The range listed is just one component of People Inc.'s total compensation package for employees. Other compensation may include annual bonuses and short- and long-term incentives. In addition, People Inc. provides employees (and their eligible family members) a variety of benefits, including medical, dental, vision, prescription drug coverage, unlimited paid time off (PTO), adoption or surrogate assistance, donation matching, tuition reimbursement, basic life insurance, basic accidental death & dismemberment, supplemental life insurance, supplemental accident insurance, commuter benefits, short-term and long-term disability, health savings and flexible spending accounts, family care benefits, a generous 401K savings plan with a company match program, 10-12 paid holidays annually, and generous paid parental leave (birthing and non-birthing parents), all of which may vary depending on the specific nature of your employment with People Inc. and your work location. We also offer voluntary benefits such as pet insurance; accident, critical illness, and hospital indemnity health insurance coverage; and life and disability insurance. #CORP#
    $170k-180k yearly Auto-Apply 29d ago
  • Senior Data Engineer, 1

    People Inc. (3.0 company rating)

    New York, NY

    Major goals, objectives, and location requirements: People Inc. is a leading digital media company that owns and operates a portfolio of highly respected brands across various verticals, including lifestyle, health, finance, and more. With a commitment to providing high-quality content and innovative digital experiences, People Inc. reaches millions of users globally and continues to drive growth and engagement across its platforms.

    People Inc. is looking for a Senior Data Engineer with strong Python and SQL skills to join the Data Operations team. The successful candidate will help build our data integration pipelines that feed into our data lakes and warehouses while maintaining data quality and integrity in our data stores. We are looking for someone who is a great team player but can also work independently. This person will also work closely with key stakeholders to understand and implement business requirements and see to it that data deliverables are met.

    Remote or Hybrid (3x a month). In-office Expectations: This position offers remote work flexibility; however, if you reside within a commutable distance to one of our offices in New York, Des Moines, Birmingham, Los Angeles, Chicago, or Seattle, the expectation is to work from the office three times per month.

    About the Position's Contributions (weight %, accountabilities, actions, and expected measurable results):

    60%: You will enhance our systems by building new data integration pipelines and adding new data to our data lakes and warehouses while continuously optimizing them. You will work with internal team members as well as stakeholders to scope out business requirements and see data deliverables through to the end, where they will be used via our Looker platform. You will continuously look for ways to improve our data transformations and data consumption processes so that our systems run efficiently and our customers are able to use and analyze our data quickly and effectively.

    40%: You will champion coding standards and best practices by actively participating in code reviews and working to improve our internal tools and build process. You will work to ensure the security and stability of our infrastructure in a multi-cloud environment. You will collaborate with our Analytics engineers to ensure data integrity and the quality of our data deliverables.

    The Role's Minimum Qualifications and Job Requirements:

    Education: Degree in a quantitative field such as computer science, statistics, mathematics, engineering, or data science, or equivalent experience.

    Experience: A minimum of 3 years of experience in building and optimizing data pipelines with Python. You have experience writing complex SQL queries to analyze data. You have commendable experience with at least one cloud service platform (GCP and AWS preferred). You've worked with data at scale using Apache Spark, Beam, or a similar framework. You're familiar with data streaming architectures using technologies like Pub/Sub and Apache Kafka. You are eager to learn about new tech stacks, big data technologies, data pipelining architectures, etc., and propose your findings to the team to try to optimize our systems.

    Specific Knowledge, Skills, Certifications and Abilities: Strong Python and SQL skills. Experience with Google Cloud Platform is a plus.

    % Travel Required (Approximate): 0%

    It is the policy of People Inc. to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, the Company will provide reasonable accommodations for qualified individuals with disabilities. Accommodation requests can be made by emailing *************.

    The Company participates in the federal E-Verify program to confirm the identity and employment authorization of all newly hired employees. For further information about the E-Verify program, please click here: **********************************

    Pay Range: New York: $140,000.00 - $170,000.00. The pay range above represents the anticipated low and high end of the pay range for this position and may change in the future. Actual pay may vary and may be above or below the range based on various factors including but not limited to work location, experience, and performance. The range listed is just one component of People Inc.'s total compensation package for employees. Other compensation may include annual bonuses and short- and long-term incentives. In addition, People Inc. provides employees (and their eligible family members) a variety of benefits, including medical, dental, vision, prescription drug coverage, unlimited paid time off (PTO), adoption or surrogate assistance, donation matching, tuition reimbursement, basic life insurance, basic accidental death & dismemberment, supplemental life insurance, supplemental accident insurance, commuter benefits, short-term and long-term disability, health savings and flexible spending accounts, family care benefits, a generous 401K savings plan with a company match program, 10-12 paid holidays annually, and generous paid parental leave (birthing and non-birthing parents), all of which may vary depending on the specific nature of your employment with People Inc. and your work location. We also offer voluntary benefits such as pet insurance; accident, critical illness, and hospital indemnity health insurance coverage; and life and disability insurance. #NMG#
    $140k-170k yearly Auto-Apply 60d+ ago
  • Data Scientist (Hybrid/U.S. Citizens Only)

    Task Force Talent (3.8 company rating)

    Tysons Corner, VA

    Job Description: Task Force Talent is seeking data scientists for a very well-funded Series C company working on insider threat and supply chain security problems. The target salary range is 150k to 200k+, plus equity, depending on experience level and location. Please note that although this is a data science position, the company is looking for skill sets much closer to software engineering. This is NOT a role focused just on statistical analysis or building dashboards. Candidates will have to pass technical interviews similar to software engineering interviews. The company is profitable and growing fast, with approximately 150+ employees. Positions are available in Tysons Corner, VA, and Salt Lake City, UT, with a hybrid (typically 3 days/week in the office) schedule; those hours are flexible to accommodate family/childcare and traffic, as the goal of in-office hours is to get to know your team better.

    Benefits: Company equity options and 401(k) plan. Unlimited PTO and wellness reimbursement. U.S. holidays. Paid parental leave. Comprehensive insurance (medical, dental, and vision).

    This company is completely private sector; no security clearance is required. However, employment is open to U.S. citizens only at this time.

    Qualifications: MUST be a U.S. citizen (no permanent residents, no visa sponsorship); while no clearance is required, candidates must be clearance-eligible. AT LEAST 3+ years (ideally 5+ to be most competitive) of experience in data science and/or analytics. MS or BS in an engineering (preferred) or quantitative field (Science, Economics, Statistics) with a background in programming. Strong proficiency in Python, SQL, R, or comparable languages. Experience with large-scale data analysis and statistical modeling. Ability to transform complex data into actionable insights. Strong project management and prioritization skills. Experience with Elasticsearch preferred. Proven ability to work independently and as part of a team. Strong communication skills for technical and non-technical audiences. BONUS: Foreign language fluency, particularly languages associated with threat actors.

    Interview Process: This company typically has a phone screen, followed by a coding exercise, and then several in-person interviews. They usually move fast -- introduction to offer within two to three weeks.

    About Us: Task Force Talent is a specialized recruiting firm for science, engineering, and security careers. Our clients include seed to Series C startups working on AI, cybersecurity, quantum computing, and other novel technologies. We also work with small to medium-sized government contractors, and we help leading venture capital firms find talent for their portfolio companies. We have hundreds of jobs available and consider all applicants for all roles, now and in the future. Our goal is to find the best fit for you!

    Not your dream job, but perfect for a friend? You can submit a referral and get a check for $2000 or more: ***************************************** (Terms and conditions apply.) If you don't see the perfect fit, simply use our general application at: ****************************************************************************************
    $76k-110k yearly est. 20d ago
  • Data Engineer Level 3 - Frontend Development

    Metropolitan Transportation Commission 4.6company rating

    New York, NY jobs

at MTA Headquarters
JOB TITLE: Data Engineer, Level 3
DEPT/DIV: Chief Strategic Initiatives Officer / Data & Analytics
WORK LOCATION: 2 Broadway
FULL/PART-TIME: Full
SALARY RANGE: $110,000 - $129,780
DEADLINE: Until filled
This position is eligible for teleworking, which is currently one day per week. New hires are eligible to apply 30 days after their effective hire date.
Opening: The Metropolitan Transportation Authority is North America's largest transportation network, serving a population of 15.3 million people across a 5,000-square-mile travel area surrounding New York City, Long Island, southeastern New York State, and Connecticut. The MTA network comprises the nation's largest bus fleet and more subway and commuter rail cars than all other U.S. transit systems combined. MTA strives to provide a safe and reliable commute, excellent customer service, and rewarding opportunities.
Position Objective: As a member of the Data & Analytics team, you will work on a dynamic, growing team tasked with helping to define and achieve the MTA's strategic priorities and address its most pressing challenges. You will see your work become public data and influence critical decisions that affect millions of New Yorkers. Our work combines new approaches and algorithms with deep knowledge of our operations to deliver deeper insights into our performance. Our outputs are used by a wide array of people: from the MTA Board and senior executives to front-line managers and our open data customers. If you would like to use your programming skills to work on complex problems to improve public transit in NYC, then we'd like to talk to you. The incumbent will help lead the redevelopment of metrics.mta.info, the MTA's external-facing data visualization site built on open data, as well as various other internal-facing data tools used for real-time system analysis.
This role combines front-end engineering expertise with a strong understanding of data visualization best practices and user experience design to deliver intuitive, sustainable, and performant tools that help teams across the MTA access, understand, and make decisions on data. They will use modern front-end technologies and frameworks (e.g., React, Svelte, D3.js) to build responsive and maintainable interfaces. They will also be expected to work fluently with back-end engineers and data systems, using tools like SQL and Python, to ensure seamless integration between front-end applications and the underlying data pipelines. They will carefully document all work and work closely with colleagues to define needs, problem-solve, support the overall team agenda, and build relationships at all levels of the agency. They will have to be able to quickly learn the unique features, data constraints, and business needs of any part of the MTA. In general, they will support the MTA's strategic goal of building data systems and processes that are well-structured and sustainable.
Responsibilities:
Lead the design and development of a new metrics.mta.info and other internal-facing data tools, ensuring usability, performance, and clarity.
Collaborate closely with data engineers, analysts, and product stakeholders to understand requirements, source data, and align front-end tools with upstream data pipelines.
Support MTA's open data and transparency goals by translating complex data concepts into intuitive visualizations, dashboards, and interfaces that support decision-making across the agency and build awareness of key MTA issues and progress with customers.
Proactively identify and resolve UI/UX issues, technical debt, and performance issues on visualization products.
Use the latest data technologies and cloud platforms to collect, integrate, ingest, transform, and organize data and information to support data analyses, reporting, dashboarding, and modern analytics across the agency, with an emphasis on standardization, simplification, flexibility, and reuse.
Research and recommend the best emerging technologies, toolsets, applications, and systems to improve efficiency and efficacy.
In consultation with managers in the Data & Analytics function, help lead project planning, help carry out project management, and communicate with clients and all relevant parties on project status.
Support the selection and development of teammates within the department.
Support a professional environment that respects individual differences and enables all colleagues to develop, contribute to their full potential, and achieve career goals at the MTA and beyond.
Perform other duties as assigned.
Required Knowledge/Skills/Abilities:
Strong skills in front-end development using modern frameworks such as React or Svelte; experience with TypeScript and component-based design preferred.
Experience managing the hosting and deployment of high-performance front-end applications.
Experience with DevOps tools (e.g., GitHub Actions, Argo, etc.) and familiarity with related concepts (e.g., CI/CD).
Strong sense of effective design and visual communication through data.
Ability to work and communicate with data scientists, including understanding basic data science concepts, statistics, and common languages such as Python.
Strong skills in programming, database design, and data lake architectures for data engineering.
Ability to play a lead role on the Data & Analytics team to set priorities and overall program planning in support of business goals.
Ability to work with data of different types (structured, semi-structured, unstructured) and from distinct disciplines (transportation, finance, HR, asset maintenance).
Knowledge of transit/transportation systems and excellent judgment on how to manage or overcome technical, organizational, or governance constraints to meet business goals.
Familiarity with common algorithms used to calculate KPIs and extensive experience with data visualization and business intelligence tools such as Power BI.
Experience with data engineering orchestration tools (e.g., Airflow, Dagster, etc.) and DevOps tools (e.g., ADO, Git, Jenkins, Docker, etc.) and familiarity with related concepts (e.g., CI/CD).
Extensive experience and ability to design and implement quality assurance and automated testing systems.
Exceptional ability to read code, understand technical issues, and keep up with technical innovation and trends in data engineering.
Ability to collaborate with and provide support to all levels of MTA, both technical and non-technical.
Ability to deconstruct difficult problems into smaller and simpler pieces.
Extensive experience in project design, strong project management skills, and the ability to lead successful team-based projects.
Proactive interest in identifying strategic, policy, and business issues in all proposed and ongoing projects.
Strong written communication skills for both non-technical and technical documents.
Ability to present and engage on complex work products with executive audiences and to review and make decisions on project design options with clients and stakeholders.
Required Education and Experience:
Bachelor's degree in Computer Science, Information Technology, Engineering, Mathematics, or a related field. An equivalent combination of education and experience may be substituted in lieu of a degree.
A minimum of four (4) years of experience working in front-end development or another position with similar programming and data visualization content.
A minimum of three (3) years of experience working in data engineering or another position with similar programming and dataset design content.
A minimum of three (3) years of experience building pipelines, automating tasks through scripts, writing database queries, and debugging/maintaining code.
A minimum of four (4) years of experience with both Python and SQL programming.
A minimum of four (4) years of experience with relational and NoSQL databases (e.g., Oracle, Postgres, SQL Server), including writing queries (generally with PL/SQL) to obtain and manipulate data.
The following is preferred:
Master's degree in Computer Science, Information Technology, Engineering, Mathematics, or a related field.
Other Information
May need to work outside of normal work hours (i.e., evenings and weekends). Travel may be required to other MTA locations or other external sites. According to the New York State Public Officers Law & the MTA Code of Ethics, all employees who hold a policymaking position must file an Annual Statement of Financial Disclosure (FDS) with the NYS Commission on Ethics and Lobbying in Government (the "Commission").
Equal Employment Opportunity
MTA and its subsidiary and affiliated agencies are Equal Opportunity Employers, including with respect to veteran status and individuals with disabilities. The MTA encourages qualified applicants from diverse backgrounds, experiences, and abilities, including military service members, to apply.
    $110k-129.8k yearly Auto-Apply 57d ago
  • Data Scientist - Property/Casualty

    Farm Bureau Financial Services 4.5company rating

    West Des Moines, IA jobs

Farm Bureau is looking for a strong data-driven professional to join our team and help us live out our mission of protecting livelihoods and futures. In this role, you will perform statistical analysis as part of the predictive modeling team and assist in the delivery of pricing models and competitive research. Candidates must be local to the Des Moines area or willing to relocate, as at least three days per week in the office are required. Candidates must also have predictive modeling or actuarial experience.
Who We Are: At Farm Bureau Financial Services, we make insurance simple so our client/members can feel confident knowing their family, home, cars, and other property are protected. We value a culture where integrity, teamwork, passion, service, leadership, and accountability are at the heart of every decision we make and every action we take. We're proud of our more than 80-year commitment to protecting the livelihoods and futures of our client/members and creating an atmosphere where our employees thrive.
What You'll Do:
* Develop, implement, and document predictive models
* Interpret results for audiences of varying technical knowledge
* Test, validate, and correct/clean data as needed
* Collaborate with managing actuaries on the pricing teams to deliver results
What It Takes to Join Our Team:
* College degree plus 3 years of predictive modeling experience required.
* Property/Casualty experience preferred.
* Strong skills with SQL, R, Emblem, or other predictive modeling software.
* Team focus and the ability to work in a collaborative environment.
* Strong communication and presentation skills.
What We Offer You: When you're on our team, you get more than a great paycheck. You'll hear about career development and educational opportunities. We offer an enhanced 401K with a match; low-cost health, dental, and vision benefits; and life and disability insurance options.
We also offer paid time off, including holidays and volunteer time, and teams who know how to have fun. Add to that an onsite wellness facility with fitness classes and programs, a daycare center, a cafeteria, and, for many positions, even consideration for a hybrid work arrangement. Farm Bureau... where the grass really IS greener!
Work Authorization/Sponsorship: Applicants must be currently authorized to work in the United States on a full-time basis. We are not able to sponsor now or in the future, or take over sponsorship of, an employment visa or work authorization for this role. For example, we are not able to sponsor OPT status.
    $65k-89k yearly est. 60d+ ago
  • Educator Preparation Data Scientist

    CSU 3.8company rating

    Sacramento, CA jobs

Chancellor's Office Statement
Join our team at the California State University, Office of the Chancellor, and make a difference in providing access to higher education. We are currently seeking experienced candidates for the position of Educator Preparation Data Scientist. The CSU Chancellor's Office, located on the waterfront adjacent to the Aquarium of the Pacific in downtown Long Beach, is the headquarters for the nation's largest and most diverse system of higher education. The CSU Chancellor's Office offers a premium benefit package that includes outstanding vacation, health, and dental plans; a fee waiver education program; membership in the California Public Employees Retirement System (PERS); and 15 paid holidays a year.
Salary
The anticipated salary hiring range is up to $104,136 per year, commensurate with qualifications and experience.
Classification
Administrator II
Position Information
The California State University, Office of the Chancellor, is seeking an Educator Preparation Data Scientist to be responsible for managing, developing, and maintaining the Center's educator preparation data systems and dashboards. The Data Scientist performs strategic data analyses to inform continuous improvement efforts in CSU's educator preparation programs and supports campus data needs in ways that enhance the system's ability to recruit, prepare, develop, and retain outstanding teachers for California schools. The data initiatives overseen in this role directly advance the Chancellor's Office strategic plan, "CSU Forward: Thriving Students, Thriving University, Thriving California," by providing actionable evidence about CSU educator preparation programs.
These initiatives support:
- Public transparency and informed strategic planning and policy development;
- Campus accreditation and compliance with state and federal requirements;
- Evaluation of state, federal, and privately funded grants and initiatives;
- Systemwide accountability, including the establishment of improvement goals and priorities;
- Effective communication of the impact and value of university-based educator preparation programs; and
- Statewide policy discussions related to educator preparation reform.
THIS POSITION IS LOCATED AT THE CSU SACRAMENTO CAMPUS. This position is approved for telecommuting (two days telecommuting, three days in office in person) with onsite work at the main headquarters located in Sacramento, California.
Responsibilities
Under the general direction of the Director, Educator Quality Center, the Educator Preparation Data Scientist will perform duties as outlined below:
Data Systems Project Management
- Design, develop, and manage systems and processes for collecting, extracting, loading, and integrating high-priority credential program and student data into the EdQ operational data store.
- Lead the development of dashboards and reporting tools that support CSU educator preparation programs.
- Collaborate with a wide range of stakeholders, including campus and Chancellor's Office staff, internal IT and IR&A teams, and external partners such as state and national education agencies, funding organizations, consultants, and vendors, to ensure data systems meet strategic and operational needs.
Strategic Data Analysis
- Conduct strategic data analyses to generate valid and reliable evidence that supports continuous improvement in CSU educator preparation programs and addresses California's educator workforce needs.
- Translate analytical findings into actionable insights and communicate them clearly through data visualizations, storytelling, and presentations tailored to non-technical audiences.
- Evaluate the impact of key initiatives using quantitative evidence and support data-informed decision-making across the CSU system.
Support Effective Use of Data
- Develop and refine systems and processes that improve the quality, consistency, and usability of existing educator preparation metrics administered by EdQ.
- Define and monitor key performance indicators (KPIs) in collaboration with educator preparation faculty and practitioners to reflect their goals and values.
- Promote the standardization and adoption of shared data tools, software platforms, and protocols across CSU educator preparation programs.
Qualifications
This position requires:
- Demonstrated interest in improving educational outcomes, particularly in educator preparation.
- Master's degree or higher in a technical, computational, or quantitative field (e.g., Data Science, Statistics, Computer Science, Economics, Educational Measurement, or related discipline).
- A minimum of four years of professional experience, including at least one year of hands-on experience in data science, analytics, or applied research.
- Proven ability to deliver data products, tools, or original research, preferably in an educational or public-sector context, using large or complex datasets.
- Strong quantitative skills, including experience with statistical methods used in education research.
- Proficiency with one or more statistical software tools or programming languages (e.g., R, Python, Stata).
- Working knowledge of SQL, relational databases, and data visualization tools (e.g., Tableau, Power BI).
- Experience with data mining, exploration, and visualization techniques.
- Familiarity with data systems design, development, and management.
- Strong project management skills and ability to work collaboratively with cross-functional teams.
- Experience administering surveys using platforms such as Qualtrics, SurveyMonkey, or QuestionPro.
- Experience working with sensitive or confidential data in a secure computing environment.
Preferred Qualifications
- Experience with ETL (Extract, Transform, Load) processes and data warehousing solutions.
- Proficiency with version control systems such as Git.
- Familiarity with data governance practices, especially in educational or public-sector environments.
- Experience using AI or machine learning tools (e.g., OpenAI) to enhance data analysis, automation, or reporting workflows.
- Knowledge of educator preparation policy, accreditation processes, or workforce analytics.
Application Period
Priority consideration will be given to candidates who apply by December 2, 2025. Applications will be accepted until the job posting is removed.
How To Apply
Please click "Apply Now" to complete the California State University, Chancellor's Office online employment application.
Equal Employment Opportunity
Consistent with California law and federal civil rights laws, the CSU provides equal opportunity in education and employment without unlawful discrimination or preferential treatment based on race, sex, color, ethnicity, or national origin. Reasonable accommodations will be provided for qualified applicants with disabilities who self-disclose by contacting the Senior Human Resources Manager at **************.
Title IX
Please view the Notice of Non-Discrimination on the Basis of Gender or Sex and Contact Information for the Title IX Coordinator at: *********************************
E-Verify
This position requires new hire employment verification to be processed through the E-Verify program administered by the Department of Homeland Security, U.S. Citizenship and Immigration Services (DHS-USCIS), in partnership with the Social Security Administration (SSA). If hired, you will be required to furnish proof that you are legally authorized to work in the United States. The CSU Chancellor's Office is not a sponsoring agency for staff and management positions (i.e., H-1B visas).
COVID-19 Vaccination Policy
Per the CSU COVID-19 Vaccination Policy, it is strongly recommended that all Chancellor's Office employees who are accessing office and campus facilities follow COVID-19 vaccine recommendations adopted by the U.S. Centers for Disease Control and Prevention (CDC) and the California Department of Public Health (CDPH) applicable to their age, medical condition, and other relevant indications.
Mandated Reporter Per CANRA
The person holding this position is considered a 'mandated reporter' under the California Child Abuse and Neglect Reporting Act and is required to comply with the requirements set forth in CSU Executive Order 1083 as a condition of employment.
CSU Out-of-State Employment Policy
California State University, Office of the Chancellor, as part of the CSU system, is a State of California employer. As such, the University requires all employees upon date of hire to reside in the State of California. As of January 1, 2022, the CSU Out-of-State Employment Policy prohibits the hiring of employees to perform CSU-related work outside the state of California.
Background
The Chancellor's Office policy requires that the selected candidate successfully complete a full background check (including a criminal records check) prior to assuming this position.
    $104.1k yearly 37d ago
  • Educator Preparation Data Scientist

    CSU Careers 3.8company rating

    Sacramento, CA jobs

Chancellor's Office Statement
Join our team at the California State University, Office of the Chancellor, and make a difference in providing access to higher education. We are currently seeking experienced candidates for the position of Educator Preparation Data Scientist. The CSU Chancellor's Office, located on the waterfront adjacent to the Aquarium of the Pacific in downtown Long Beach, is the headquarters for the nation's largest and most diverse system of higher education. The CSU Chancellor's Office offers a premium benefit package that includes outstanding vacation, health, and dental plans; a fee waiver education program; membership in the California Public Employees Retirement System (PERS); and 15 paid holidays a year.
Salary
The anticipated salary hiring range is up to $104,136 per year, commensurate with qualifications and experience.
Classification
Administrator II
Position Information
The California State University, Office of the Chancellor, is seeking an Educator Preparation Data Scientist to be responsible for managing, developing, and maintaining the Center's educator preparation data systems and dashboards. The Data Scientist performs strategic data analyses to inform continuous improvement efforts in CSU's educator preparation programs and supports campus data needs in ways that enhance the system's ability to recruit, prepare, develop, and retain outstanding teachers for California schools. The data initiatives overseen in this role directly advance the Chancellor's Office strategic plan, "CSU Forward: Thriving Students, Thriving University, Thriving California," by providing actionable evidence about CSU educator preparation programs.
These initiatives support:
- Public transparency and informed strategic planning and policy development;
- Campus accreditation and compliance with state and federal requirements;
- Evaluation of state, federal, and privately funded grants and initiatives;
- Systemwide accountability, including the establishment of improvement goals and priorities;
- Effective communication of the impact and value of university-based educator preparation programs; and
- Statewide policy discussions related to educator preparation reform.
THIS POSITION IS LOCATED AT THE CSU SACRAMENTO CAMPUS. This position is approved for telecommuting (two days telecommuting, three days in office in person) with onsite work at the main headquarters located in Sacramento, California.
Responsibilities
Under the general direction of the Director, Educator Quality Center, the Educator Preparation Data Scientist will perform duties as outlined below:
Data Systems Project Management
- Design, develop, and manage systems and processes for collecting, extracting, loading, and integrating high-priority credential program and student data into the EdQ operational data store.
- Lead the development of dashboards and reporting tools that support CSU educator preparation programs.
- Collaborate with a wide range of stakeholders, including campus and Chancellor's Office staff, internal IT and IR&A teams, and external partners such as state and national education agencies, funding organizations, consultants, and vendors, to ensure data systems meet strategic and operational needs.
Strategic Data Analysis
- Conduct strategic data analyses to generate valid and reliable evidence that supports continuous improvement in CSU educator preparation programs and addresses California's educator workforce needs.
- Translate analytical findings into actionable insights and communicate them clearly through data visualizations, storytelling, and presentations tailored to non-technical audiences.
- Evaluate the impact of key initiatives using quantitative evidence and support data-informed decision-making across the CSU system.
Support Effective Use of Data
- Develop and refine systems and processes that improve the quality, consistency, and usability of existing educator preparation metrics administered by EdQ.
- Define and monitor key performance indicators (KPIs) in collaboration with educator preparation faculty and practitioners to reflect their goals and values.
- Promote the standardization and adoption of shared data tools, software platforms, and protocols across CSU educator preparation programs.
Qualifications
This position requires:
- Demonstrated interest in improving educational outcomes, particularly in educator preparation.
- Master's degree or higher in a technical, computational, or quantitative field (e.g., Data Science, Statistics, Computer Science, Economics, Educational Measurement, or related discipline).
- A minimum of four years of professional experience, including at least one year of hands-on experience in data science, analytics, or applied research.
- Proven ability to deliver data products, tools, or original research, preferably in an educational or public-sector context, using large or complex datasets.
- Strong quantitative skills, including experience with statistical methods used in education research.
- Proficiency with one or more statistical software tools or programming languages (e.g., R, Python, Stata).
- Working knowledge of SQL, relational databases, and data visualization tools (e.g., Tableau, Power BI).
- Experience with data mining, exploration, and visualization techniques.
- Familiarity with data systems design, development, and management.
- Strong project management skills and ability to work collaboratively with cross-functional teams.
- Experience administering surveys using platforms such as Qualtrics, SurveyMonkey, or QuestionPro.
- Experience working with sensitive or confidential data in a secure computing environment.
Preferred Qualifications
- Experience with ETL (Extract, Transform, Load) processes and data warehousing solutions.
- Proficiency with version control systems such as Git.
- Familiarity with data governance practices, especially in educational or public-sector environments.
- Experience using AI or machine learning tools (e.g., OpenAI) to enhance data analysis, automation, or reporting workflows.
- Knowledge of educator preparation policy, accreditation processes, or workforce analytics.
Application Period
Priority consideration will be given to candidates who apply by December 2, 2025. Applications will be accepted until the job posting is removed.
How To Apply
Please click "Apply Now" to complete the California State University, Chancellor's Office online employment application.
Equal Employment Opportunity
Consistent with California law and federal civil rights laws, the CSU provides equal opportunity in education and employment without unlawful discrimination or preferential treatment based on race, sex, color, ethnicity, or national origin. Reasonable accommodations will be provided for qualified applicants with disabilities who self-disclose by contacting the Senior Human Resources Manager at (562) 951-4070.
Title IX
Please view the Notice of Non-Discrimination on the Basis of Gender or Sex and Contact Information for the Title IX Coordinator at: https://www2.calstate.edu/titleix
E-Verify
This position requires new hire employment verification to be processed through the E-Verify program administered by the Department of Homeland Security, U.S. Citizenship and Immigration Services (DHS-USCIS), in partnership with the Social Security Administration (SSA). If hired, you will be required to furnish proof that you are legally authorized to work in the United States. The CSU Chancellor's Office is not a sponsoring agency for staff and management positions (i.e., H-1B visas).
COVID-19 Vaccination Policy
Per the CSU COVID-19 Vaccination Policy, it is strongly recommended that all Chancellor's Office employees who are accessing office and campus facilities follow COVID-19 vaccine recommendations adopted by the U.S. Centers for Disease Control and Prevention (CDC) and the California Department of Public Health (CDPH) applicable to their age, medical condition, and other relevant indications.
Mandated Reporter Per CANRA
The person holding this position is considered a 'mandated reporter' under the California Child Abuse and Neglect Reporting Act and is required to comply with the requirements set forth in CSU Executive Order 1083 as a condition of employment.
CSU Out-of-State Employment Policy
California State University, Office of the Chancellor, as part of the CSU system, is a State of California employer. As such, the University requires all employees upon date of hire to reside in the State of California. As of January 1, 2022, the CSU Out-of-State Employment Policy prohibits the hiring of employees to perform CSU-related work outside the state of California.
Background
The Chancellor's Office policy requires that the selected candidate successfully complete a full background check (including a criminal records check) prior to assuming this position.
    $104.1k yearly 36d ago
  • Manager, HR Data Engineer

    Metropolitan Transportation Authority 4.6company rating

    New York, NY jobs

at MTA Headquarters
JOB TITLE: Manager, HR Data Engineer
DEPT/DIV: People Department
FULL/PART-TIME: Full
SALARY RANGE: $110,000 - $123,000
DEADLINE: Until filled
This position is eligible for teleworking, which is currently one day per week. New hires are eligible to apply 30 days after their effective hire date.
Opening: The Metropolitan Transportation Authority is North America's largest transportation network, serving a population of 15.3 million people across a 5,000-square-mile travel area surrounding New York City, Long Island, southeastern New York State, and Connecticut. The MTA network comprises the nation's largest bus fleet and more subway and commuter rail cars than all other U.S. transit systems combined. MTA strives to provide a safe and reliable commute, excellent customer service, and rewarding opportunities.
Position Objective: This position will report to the Director of HR Data Science and support day-to-day data pipeline initiatives to design, build, and maintain data structures that facilitate reporting and the monitoring of key performance indicators. The incumbent will collaborate across Human Capital Management disciplines to identify internal/external data sources, design table structures, define ETL strategy, automate quality assurance checks, and implement scalable ETL solutions.
Responsibilities:
* Ensure that all assignments are completed with the highest quality and within agreed-upon Service Level Agreement guidelines and Key Performance Indicator (KPI) targets.
* Create and conduct project/architecture design reviews.
* Apply proficient knowledge of Azure Data Factory, Databricks, and Azure Delta Lake.
* Apply experience with programming languages (e.g., Python, R).
* Apply working experience in data extraction using APIs.
* Develop HR data pipelines and reports with advanced SQL and maintain the Data Warehousing/Data Lakes/Data Hubs and analytical reporting environment.
* Work with IT teams to collect required data from internal and external systems and troubleshoot HR system issues.
* Design and build modern data management solutions and create POCs when necessary to test new approaches.
* Create runbooks and actionable alerts as part of the development process.
* Perform SQL and ETL tuning as necessary.
* Perform ad hoc analysis as necessary.
* Identify and implement continuous improvement initiatives as assigned.
* Other duties as assigned.
Qualifications:
Knowledge/Skills/Abilities:
* Strong understanding of data modeling principles, including dimensional modeling, data normalization principles, etc.
* Proficient understanding of SQL engines and ability to develop advanced queries and analytics.
* Familiarity with data exploration/data visualization tools like MS Power BI, PeopleSoft HCM, JobVite, JDXpert, Jetdocs, or Oracle Analytics Cloud.
* Ability to think strategically and to analyze and interpret Human Capital Management and financial data.
* Strong communication skills, both written and in verbal presentations.
* Excellent conceptual and analytical reasoning competencies.
* Comfortable working in a fast-paced and highly collaborative environment.
* Process-oriented with excellent documentation skills, including strong Excel & PowerPoint skills, Visio flows, and mock-up creation.
Required Education and Experience:
* Bachelor's degree in Computer Science, Information Management, Statistics, Finance, or a related field. An equivalent combination of education and experience may be considered in lieu of a degree.
* A minimum of four (4) years of relevant professional experience leading, implementing, and reporting on business key performance indicators in a data warehousing/lake/hub environment.
* Minimum of four (4) years of experience using SQL for analytics, working with traditional relational databases and/or distributed systems such as PeopleSoft Enterprise Performance Management (EPM), Hadoop/Hive, BigQuery, Redshift, or Oracle databases. Preferred: * Master's degree in Computer Science, Information Management, or Statistics. * Experience with programming languages (e.g., Python, R). * Minimum of four (4) years of management experience. * Minimum of one (1) year of experience with workflow management tools (Airflow, Oozie, Azkaban, UC4). * Understanding of finance data and human resource data practices and procedures. * Proficient knowledge of data repositories/warehouses/lakes/hubs. Other Information: May need to work outside of normal work hours (i.e., evenings and weekends). Travel may be required to other MTA locations or other external sites. According to the New York State Public Officers Law and the MTA Code of Ethics, all employees who hold a policymaking position must file an Annual Statement of Financial Disclosure (FDS) with the NYS Commission on Ethics and Lobbying in Government (the "Commission"). Employees driving company vehicles must complete defensive driver training once every three years for current MNR drivers, or within 180 days of hire or transfer for an employee entering an authorized driving position. Equal Employment Opportunity: MTA and its subsidiary and affiliated agencies are Equal Opportunity Employers, including with respect to veteran status and individuals with disabilities. The MTA encourages qualified applicants from diverse backgrounds, experiences, and abilities, including military service members, to apply.
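The dimensional modeling and advanced SQL skills this posting asks for can be illustrated with a minimal star-schema sketch: one fact table keyed to one dimension, queried for a KPI. Everything below is hypothetical (table names, columns, and data are invented for illustration, not MTA's actual schema), and SQLite is used only so the example is self-contained:

```python
import sqlite3

# Toy star schema: a dimension describing departments and a fact table
# recording monthly headcount. Names and values are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_department (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
CREATE TABLE fact_headcount (
    dept_id INTEGER REFERENCES dim_department(dept_id),
    month   TEXT,
    heads   INTEGER
);
INSERT INTO dim_department VALUES (1, 'Operations'), (2, 'Engineering');
INSERT INTO fact_headcount VALUES (1, '2025-01', 120), (2, '2025-01', 45),
                                  (1, '2025-02', 125), (2, '2025-02', 47);
""")

# KPI query: average monthly headcount per department.
rows = conn.execute("""
    SELECT d.dept_name, AVG(f.heads) AS avg_heads
    FROM fact_headcount f
    JOIN dim_department d USING (dept_id)
    GROUP BY d.dept_name
    ORDER BY d.dept_name
""").fetchall()
print(rows)  # [('Engineering', 46.0), ('Operations', 122.5)]
```

The same fact/dimension split carries over to the warehouse engines the posting lists (BigQuery, Redshift, Oracle); only the SQL dialect details change.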
    $110k-123k yearly Auto-Apply 45d ago
  • Senior Data Engineer (1043)

    City and County of San Francisco 3.0company rating

    San Francisco, CA jobs

    Department: DataSF Salary range: $153,686/year - $193,388/year Role type: Permanent Exempt Hours: Full-time (Hybrid work schedule) About: Permanent Exempt: This is an Exempt position excluded by the Charter from the competitive Civil Service examination process, pursuant to the City and County of San Francisco, Charter Section 10.104. It is considered "at will" and shall serve at the discretion of the Appointing Officer. Application Opening: Wednesday, November 19, 2025 Application Closing: Interested candidates are encouraged to apply as soon as possible, as this job announcement may close at any time, but not earlier than Friday, January 2, 2026. As part of the application process, you must also complete a Supplemental Questionnaire using this link: ************************************* The Office of the City Administrator and its 25+ divisions and departments operate core internal and public-facing services in San Francisco. The Office of the City Administrator's Mission and Vision Our vision is to lead the nation in public administration and to enable City departments to effectively deliver critical public services. We aim to help the city run better, to connect San Francisco residents and constituents to the vital public services they seek, and to create a meaningful and diverse work culture that is the place of choice for people who are invested in a career in public service. We are committed to ensuring that City services are inclusive, efficient, equitable, and culturally competent for San Franciscans of all races, ethnic backgrounds, religions, and sexual orientations. This commitment requires comprehensive review and thorough analysis of existing practices and policies to remove barriers to real inclusion. We are also committed to ensuring that we have a safe, equitable, and inclusive workplace for individuals of all races. 
This includes creating opportunities for hiring, promotion, training, and development, for all employees, including but not limited to, Black, Indigenous, and people of color (BIPOC). To learn more about our departments, divisions, and programs, visit: ********************************************* About DataSF: Want to build robust and scalable data infrastructure to power data-driven City services? Join the DataSF team to empower and expand the use of data in government! Company Description DataSF's mission is to transform the way San Francisco works with data. We believe that data, when harnessed effectively, can improve transparency, resident engagement, and government performance. We are committed to removing barriers and making it easier for all people to access services and knowledge, and we actively seek team members whose diverse life experiences reflect the residents we serve. Job Description DataSF is seeking a Senior Data Engineer with 3+ years of experience to join our growing team. Reporting to the Principal Data Engineer, you will be instrumental in designing, building, and maintaining the City's data infrastructure, enabling robust data pipelines and reliable data access for analytical and operational needs. This is an exciting position for someone eager to apply advanced data engineering techniques to complex urban challenges, contributing directly to San Francisco's commitment to efficient, equitable, and ethical service delivery. Learn more about DataSF's recent work on our blog. If you are an entrepreneurial and passionate data enthusiast, join our team to improve government through good use of data! Essential duties include, but are not limited to, the following: Platform Administration: Manage our central Snowflake data warehouse, including access control, security policies, resource monitoring, performance tuning, and cost optimization. 
Administer our platform with a focus on data democratization and accessibility, while protecting privacy and security. Pipeline Development: Build and maintain scalable and resilient pipelines to ingest and structure data from diverse sources. Design infrastructure to support both streaming and batch processes, and both structured and unstructured data sources. Infrastructure as Code (IaC): Use Terraform to define, deploy, and manage data infrastructure, ensuring our pipelines are reproducible, version-controlled, and production-ready. Best Practices & Innovation: Champion and implement best practices for documentation, data modeling, warehouse architecture, SQL optimization, and testing. Think creatively to find new ways to improve our data platform's capabilities and efficiency. Provide guidance to department partners on data engineering best practices. Collaboration: Work closely with data scientists, analysts, product managers, software engineers, and nontechnical stakeholders in diverse domains to understand data requirements and build solutions that meet their needs. Monitoring & Support: Proactively monitor the health of the data platform and pipelines, troubleshoot issues, and ensure high standards of data quality and availability. Desirable Qualifications Technical Knowledge Hands-on experience administering and developing on managed cloud data platforms such as Snowflake, BigQuery, or Databricks. Demonstrated expertise in writing advanced, performant SQL, and using tools like dbt for SQL-based data transformation and modeling. Strong programming skills in Python (with libraries like pandas, PySpark) for data processing and automation. Proficiency with an Infrastructure as Code tool, with a preference for Terraform. Experience building and deploying data pipelines using orchestration tools like Azure Data Factory, Airflow, Dagster, or similar technologies. Deep understanding of data warehousing concepts, data modeling, and modern ELT principles. 
Understanding of data governance, data security, and data privacy principles. Experience with real-time data streaming technologies (e.g., Kafka, Kinesis, Snowpipe). Experience deploying and managing data pipelines for machine learning models. Collaboration and Communication Skills Strong problem-solving skills with the ability to design practical and effective data solutions Excellent verbal and written communication skills, including the ability to explain technical concepts to non-technical stakeholders A collaborative mindset with enthusiasm to work across diverse, cross-functional teams Mission Alignment Commitment to equity, transparency, and ethical data use Passion for public service and using data to improve government services Empathy for San Francisco's diverse communities and a drive to make data and services more accessible for SF residents Interest or experience in public sector data or social impact work Qualifications Education: An associate degree in computer science, computer engineering, information systems, or a closely related field from an accredited college or university OR its equivalent in terms of total course credits/units [i.e., at least sixty (60) semester or ninety (90) quarter credits/units with a minimum of twenty (20) semester or thirty (30) quarter credits/units in one of the fields above or a closely-related field]. Experience: Three (3) years of experience analyzing, installing, configuring, enhancing, and/or maintaining the components of an enterprise network. Substitution: Additional experience as described above may be substituted for the required degree on a year-for-year basis (up to a maximum of two (2) years). One (1) year is equivalent to thirty (30) semester units / forty-five (45) quarter units with a minimum of 10 semester / 15 quarter units in one of the fields above or a closely related field. Completion of the 1010 Information Systems Trainee Program may be substituted for the required degree. 
Verification: Please make sure it is clear in your application exactly how you meet the minimum qualifications. Applicants will be required to submit verification of qualifying education and experience during the recruitment and selection process. For information on how to verify experience and/or education requirements, including verifying foreign education credits or degree equivalency, please visit Verification of Experience and/or Education. Note: Falsifying one's education, training, or work experience or attempted deception on the application may result in disqualification for this and future job opportunities with the City and County of San Francisco. Applicants must meet the minimum qualification requirement by the final application deadline unless otherwise noted. Additional Information Selection Procedures The selection process will include evaluation of applications in relation to minimum requirements and assessment of candidates' job-related knowledge, skills and abilities. Depending on the number of applicants, the Department may establish and implement additional screening mechanisms to evaluate candidate qualifications. This typically includes an oral interview and/or a written or performance exercise. If this becomes necessary, only those applicants whose qualifications most closely meet the Department needs will be invited to continue in the selection process. Applicants meeting the minimum requirements are not guaranteed advancement in the selection process. 
To find Departments which use this classification, please see: *************************************************************************************************************************** Additional Information Regarding Employment with the City and County of San Francisco: Information About The Hiring Process Conviction History Employee Benefits Overview Equal Employment Opportunity Disaster Service Worker ADA Accommodation Veterans Preference Right to Work Copies of Application Documents Diversity Statement How to Apply: Applications for City and County of San Francisco jobs are only accepted through an online process. Visit ********************** and begin the application process. Interested candidates are encouraged to apply as soon as possible, as this job announcement will close at any time, but not before Friday, January 2, 2026 (11:59 PM, PST) Select the "Apply Now" button and follow instructions on the screen You MUST include a cover letter. As part of the application process, you must also complete a Supplemental Questionnaire using this link: ************************************* For best practices on the application process, please visit Apply for Jobs in the City and County of San Francisco Best Practices Guide. Applicants may be contacted by email about this announcement and, therefore, it is their responsibility to ensure that their registered email address is accurate and kept up-to-date. Also, applicants must ensure that email from CCSF is not blocked on their computer by a spam filter. To prevent blocking, applicants should set up their email to accept CCSF mail from the following ********************, @sfdpw.org, @sfport.com, @flysfo.com, @sfwater.org, @sfdph.org, @asianart.org, @sfmta.com, @sfpl.org, @dcyf.org, @first5sf.org, @famsf.org, @ccsf.edu, @smartalerts.info, ************************). Applicants will receive a confirmation email that their online application has been received in response to every announcement for which they file. 
Applicants should retain this confirmation email for their records. Failure to receive this email means that the online application was not submitted or received. All your information will be kept confidential according to EEO guidelines. HR Analyst Information: If you have any questions regarding this recruitment or application process, please contact Debbie Yung at [email protected] Condition of Employment: The City and County of San Francisco encourages women, minorities and persons with disabilities to apply. Applicants will be considered regardless of their sex, race, age, religion, color, national origin, ancestry, physical disability, mental disability, medical condition (associated with cancer, a history of cancer, or genetic characteristics), HIV/AIDS status, genetic information, marital status, sexual orientation, gender, gender identity, gender expression, military and veteran status, or other protected category under the law.
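At their core, the orchestration tools this posting names (Airflow, Dagster) run pipeline tasks in dependency order. A minimal sketch of that idea, with hypothetical task names and Python's standard-library `graphlib` standing in for a real scheduler:

```python
from graphlib import TopologicalSorter

# Each key is a pipeline task; its value is the set of tasks it depends
# on. Task names are invented for illustration.
deps = {
    "ingest_raw":  set(),
    "clean":       {"ingest_raw"},
    "model_marts": {"clean"},
    "publish_bi":  {"model_marts"},
}

# A scheduler would execute tasks in any order consistent with deps;
# for this linear chain the order is fully determined.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['ingest_raw', 'clean', 'model_marts', 'publish_bi']
```

Real orchestrators add retries, scheduling, and observability on top of this ordering, but the dependency graph is the shared foundation.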
    $153.7k-193.4k yearly Easy Apply 35d ago
  • Data Engineer

    Services for The Underserved 4.1company rating

    New York, NY jobs

    SCOPE OF ROLE: The Data Engineer is responsible for building and maintaining a governed, analytics-ready data environment within Microsoft Fabric. This role designs and supports modern data pipelines that ingest, transform, and migrate data for enterprise analytics while ensuring high standards of data quality, metadata management, lineage, and cataloging. The Data Engineer plays a key role in establishing trust in data through strong governance practices and clear documentation. In addition, the role focuses on analytics and BI enablement, including the development and optimization of semantic models and close collaboration with BI teams to support Power BI reporting and self-service analytics. The ideal candidate brings strong SQL and data modeling skills, hands-on experience with Microsoft Fabric and related cloud data platforms, and the ability to work cross-functionally with technical and business stakeholders to deliver reliable, well-governed data solutions that support the organization's analytics strategy. ESSENTIAL DUTIES & RESPONSIBILITIES: Support and uphold the organization's data governance framework, ensuring compliance with enterprise standards. Collect, organize, and manage metadata for enterprise datasets to support analytics, governance, and Microsoft Fabric migration initiatives. Design and implement data pipelines that enable data ingestion, transformation, and migration into Microsoft Fabric. Build, maintain, and optimize semantic models for analytics, reporting, and self-service BI. Collaborate with BI teams to support Power BI report development, including dataset optimization and performance tuning. Ensure data quality, lineage, and cataloging practices are consistently implemented and maintained. Contribute to documentation, standards, and governance processes that support data literacy and platform sustainability. 
Work cross-functionally with analytics, governance, business owners, and application teams to deliver high-quality, governed data solutions. Job Requirements REQUIREMENTS: REQUIRED QUALIFICATIONS Bachelor's degree in Information Systems, Computer Science, or 5+ years of equivalent experience. Proficiency in SQL, Python, or other scripting languages for data engineering tasks. Proven experience with metadata collection and management. Experience building scalable ETL/ELT data pipelines. Hands-on experience developing semantic models for analytics and reporting. Experience supporting Power BI report development and optimization. Strong understanding of Microsoft Fabric architecture and components. Familiarity with data governance principles including quality, lineage, and stewardship. Hands-on experience with data modeling (star and snowflake schemas). Ability to collaborate effectively across technical and business teams. PREFERRED QUALIFICATIONS Experience with Microsoft Purview for metadata management and governance workflows. Knowledge of data security, compliance, and privacy standards. Experience with Fabric Data Factory, Azure Databricks, Synapse Analytics, or similar tools. Understanding of Azure Data Lake Storage and cloud-based analytics platforms. Experience contributing to data governance documentation or data literacy initiatives. Soft Skills Strong analytical and problem-solving abilities. Excellent communication and collaboration skills. Ability to manage multiple priorities in a fast-paced environment. Attention to detail with a focus on data quality and consistency. Company Overview S:US IS AN EQUAL OPPORTUNITY EMPLOYER Join a team of employees who care about the wellbeing of others. We're proud to offer a comprehensive benefits package designed to support your wellbeing and development. 
From health and wellness resources to generous PTO, professional development, and more, explore all that we offer on our Benefits Page and see how S:US invests in you. We believe in fostering a culture built on our core values: respect, integrity, support, maximizing individual potential and continuous quality improvement. S:US is an affirmative action and equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information. S:US is committed to providing access, equal opportunity and reasonable accommodation for individuals with disabilities in employment, its services, programs, and activities, including allowance of the use of service animals. To request reasonable accommodation or if you believe such a request was improperly handled or denied, contact the Leave Team at **********************. ID 2025-18021
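The data quality duties described above (ensuring quality checks are "consistently implemented and maintained" before data is published) can be sketched in a few lines. This is an illustrative toy with invented rows and rule names, not S:US's actual framework:

```python
# Hypothetical dataset: the second row violates a not-null rule and the
# third row violates a uniqueness rule on client_id.
rows = [
    {"client_id": 1, "program": "housing"},
    {"client_id": 2, "program": None},
    {"client_id": 2, "program": "treatment"},
]

def check_not_null(rows, col):
    """Return indexes of rows where `col` is missing."""
    return [i for i, r in enumerate(rows) if r[col] is None]

def check_unique(rows, col):
    """Return indexes of rows whose `col` value was already seen."""
    seen, dupes = set(), []
    for i, r in enumerate(rows):
        if r[col] in seen:
            dupes.append(i)
        seen.add(r[col])
    return dupes

failures = {
    "program_not_null": check_not_null(rows, "program"),
    "client_id_unique": check_unique(rows, "client_id"),
}
print(failures)  # {'program_not_null': [1], 'client_id_unique': [2]}
```

Governed pipelines typically run checks like these automatically on each load and block or quarantine failing rows rather than publishing them.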
    $88k-113k yearly est. Auto-Apply 3d ago
  • Senior Data Engineer

    Realtor.com 3.9company rating

    Austin, TX jobs

    Recognized as the No. 1 site trusted by real estate professionals, Realtor.com has been at the forefront of online real estate for over 25 years, connecting buyers, sellers, and renters with trusted insights and expert guidance to find their perfect home. Through its robust suite of tools, Realtor.com not only makes a significant impact on the real estate industry at large, but also helps consumers navigating the biggest purchase they will make in their lives by providing a user experience that is easy to use, easy to understand, and, most of all, makes decisions easy. Join us on our mission to empower more people to find their way home by breaking barriers to entry, making the right connections, and building confidence through expert guidance. The Work You'll Do: You'll design, build, and operate backend services and data integrations that power personalized marketing experiences across CRM and Digital Marketing channels at Realtor.com. You'll work across the marketing technology stack, connecting internal data platforms to tools that activate, measure, and optimize campaigns. Your work will ensure the reliability, observability, and scalability of data pipelines that enable millions of users to receive relevant messages at the right time. * Design, build, and operate backend services and data integrations that are secure, observable, and cost-efficient, using modern cloud patterns on AWS. * Collaborate with data platform teams on the Snowflake and Kafka data platform (streaming and batch) to enable reliable data collection, processing, and access for near real-time marketing and analytics use cases. * Contribute to streaming pipelines and consumers (e.g., Kafka/MSK) and batch workflows backed by Snowflake/DBT/Airflow/Hightouch, with a focus on correctness, performance, and operational excellence. * Implement and extend CI/CD pipelines on the company's deployment platform (CircleCI + GitHub + Argo), driving paved-path adoption, test automation, and deployment safety. 
* Help instrument, validate, and monitor clickstream and service events in partnership with data reliability teams, improving event health and taxonomy alignment across systems. * Strengthen observability for services and pipelines (metrics, logging, tracing) to reduce mean time to detection/recovery and enable effective on-call rotations. * Partner with cross-functional stakeholders (marketing, engineering, product, analytics, privacy) to define requirements, de-risk designs, and ship incremental value. * Write clear technical documentation, contribute to runbooks, and participate in design and code reviews to raise engineering quality and knowledge sharing. * Champion reliability and governance: adopt data lineage, ownership, and catalog tooling; incorporate validation and guardrails that prevent bad data from propagating. What You'll Bring: * 5+ years of experience building backend systems and/or data-intensive services in production, with ownership across design, implementation, and operations. * Bachelor's degree or equivalent experience. * Proficiency in one or more of: TypeScript/Node.js, Python, Java/Kotlin; strong foundation in writing maintainable, testable code. * Solid SQL skills and experience with Snowflake or comparable cloud data warehouses; familiarity with data modeling and working with structured/semi-structured data. * Experience with event-driven systems and streaming technologies (e.g., Kafka/MSK) and building resilient consumers/producers. * Hands-on experience with AWS services (e.g., Lambda, ECS/EKS, S3, IAM, MSK, API Gateway) and Infrastructure-as-Code (Terraform or CloudFormation). * CI/CD expertise (preferably on CircleCI + GitHub + Argo) and a testing mindset across unit, integration, contract, and data validations. * Strong observability practices (logs, metrics, traces) using tools like New Relic and Splunk; effective incident response and post-incident improvement. 
* Excellent collaboration and communication skills; ability to partner with product, analytics, and other engineering teams to deliver outcomes. Nice To Have Experience: * Experience with CRM and digital marketing platforms (Braze, Cordial, Google Ads, Meta Ads, etc.) * Experience with the Snowflake data platform and its streaming/batch components; DBT and Airflow familiarity. * Background in clickstream data reliability: event taxonomy, validation tooling, event health monitoring, and developer workflows that "shift-left" data quality. * Familiarity with tracking and SDK ecosystems (e.g., Tracking SDK and "Direct Send"), experimentation, and downstream analytics integrations. * Knowledge of data governance practices and lineage/discoverability platforms like the Data Catalog; exposure to privacy and compliance workflows (e.g., CCPA/CPRA). * Experience operating services on Kubernetes (EKS), GitOps/ArgoCD, and adopting paved-path CI/CD patterns on CircleCI. How We Work: We balance creativity and innovation on a foundation of in-person collaboration. Our employees work three days in our Austin headquarters where they have the opportunity to collaborate in-person, adding richness to our culture and knitting us closer together. How We Reward You: Realtor.com is committed to investing in the health and wellbeing of our employees and their families. 
Our benefits programs include, but are not limited to: * Inclusive and Competitive medical, Rx, dental, and vision coverage * Family forming benefits * 13 Paid Holidays * Flexible Time Off * 8 hours of paid Volunteer Time off * Immediate eligibility into Company 401(k) plan with 3.5% company match * Tuition Reimbursement program for degreed and non-degreed programs * 1:1 personalized Financial Planning Sessions * Student Debt Retirement Savings Match program * Free snacks and refreshments in each office location Do the best work of your life at Realtor.com Here, you'll partner with a diverse team of experts as you use leading-edge tech to empower everyone to meet a crucial goal: finding their way home. And you'll find your way home too. At Realtor.com, you'll bring your full self to work as you innovate with speed, serve our consumers, and champion your teammates. In return, we'll provide you with a warm, welcoming, and inclusive culture; intellectual challenges; and the development opportunities you need to grow. Diversity is important to us, therefore, Realtor.com is an Equal Opportunity Employer regardless of age, color, national origin, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, marital status, status as a disabled veteran and/or veteran of the Vietnam Era or any other characteristic protected by federal, state or local law. In addition, Realtor.com will provide reasonable accommodations for otherwise qualified disabled individuals.
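One pattern behind the "resilient consumers" this posting asks for in streaming systems like Kafka is idempotent processing: because at-least-once delivery can redeliver an event, the consumer tracks which event IDs it has already applied. The sketch below is illustrative only, with a plain list standing in for a topic and invented event fields:

```python
# Hypothetical event stream; the third event is a redelivery of the first.
events = [
    {"id": "e1", "user": "u1", "action": "saved_listing"},
    {"id": "e2", "user": "u2", "action": "viewed_listing"},
    {"id": "e1", "user": "u1", "action": "saved_listing"},  # redelivered
]

processed_ids = set()   # in production this would be durable state
applied = []            # effects of processing, applied exactly once

for event in events:
    if event["id"] in processed_ids:
        continue        # duplicate delivery: already applied, skip
    applied.append(event["action"])
    processed_ids.add(event["id"])

print(applied)  # ['saved_listing', 'viewed_listing']
```

A real Kafka consumer would persist `processed_ids` (or use transactional/exactly-once features) so deduplication survives restarts, but the invariant is the same.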
    $75k-104k yearly est. Auto-Apply 34d ago
  • CRM - Data Engineer

    Technology, Automation, and Management 3.6company rating

    Reston, VA jobs

    Mission Objectives - The Data Engineer will support government modernization initiatives by executing complex data migrations from legacy government sources to cloud-based platforms, ensuring exceptional data quality and providing insightful visualizations that enable government stakeholders to make informed decisions. This position is integral to the overall contract success by serving as the technical backbone for data transformation services, working closely with government officials and supporting programming teams to deliver comprehensive Azure-based data solutions that enhance operational efficiency and decision-making capabilities. The Data Engineer will be responsible for designing and implementing robust data pipelines, maintaining data integrity throughout migration processes, and creating actionable insights through advanced data transformation and visualization techniques. This role directly supports contract deliverables by ensuring government data assets are properly transformed, secured, and made accessible for analysis and reporting purposes. 
* Execute complex data migrations from various government sources to Azure cloud platforms while maintaining data integrity and security compliance
* Design, implement, and maintain ETL/ELT processes and data pipelines using Azure Data Factory, Azure Functions, and Azure Data Hub
* Measure, assess, and enhance data quality across government datasets through comprehensive validation frameworks and transformation protocols
* Develop and optimize data warehousing solutions with a focus on database performance optimization and scalability
* Create insightful data visualizations and reports that enable government stakeholders to make informed operational decisions
* Interface directly with government officials to gather requirements, provide technical guidance, and present data insights to non-technical stakeholders
* Collaborate with programming teams to implement Azure microservices architecture and provide technical support for data-related development needs
* Maintain comprehensive documentation of technical processes, data lineage, and system architecture to support contract compliance and knowledge transfer requirements
    $77k-107k yearly est. 60d+ ago
  • Data Scientist

    Bureau of National Affairs 4.7company rating

    Arlington, VA jobs

You are a research scientist and engineer who wants to work in the areas of machine learning, natural language processing, information extraction, graphical models, summarization, information retrieval, recommender systems, and/or knowledge graphs. You will bring new ideas, evangelize them, and shepherd their adoption within the team. You extract and identify relevant, meaningful, and actionable information from structured and unstructured data in real time and provide advanced ways of accessing this data (such as search, summarization, and recommendations).

What you will do:

* Write code (Python, R, SQL, Java, etc.) to obtain, clean, manipulate, and analyze data.
* Retrieve, synthesize, and present critical data in a format that is immediately useful for answering specific questions or improving system performance.
* Analyze historical data to identify trends and support optimal decision making.
* Build machine learning and statistical models to solve specific business problems.
* Leverage AI/LLM frameworks to develop internal and external solutions.
* Identify opportunities for improving workflows to increase data quality, accuracy, and timeliness.
* Design experiments to test hypotheses and measure the effectiveness of solutions.
* Formalize assumptions about how our systems should work, create statistical definitions of outliers, and develop methods to systematically identify outliers. Determine why such examples are outliers and whether action is needed.
* Work across teams to deliver enhancements, features, and time-sensitive projects.
* Deliver reports or presentations to share insights with audiences of varying levels of technical sophistication.

You need to have:

* Master's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field.
* 5-7 years of experience working with large projects and diverse data sets, including preprocessing, cleansing, and verifying the integrity of data.
* Proficiency in Python and SQL, R, or Java, and experience with data analysis libraries (pandas, NumPy, scikit-learn).
* Experience with data visualization tools (Tableau, Power BI, QuickSight, matplotlib, seaborn).
* Experience with database management systems (Oracle, PostgreSQL, MySQL, Redshift, etc.).
* Experience with distributed computational frameworks (YARN, Spark, Hadoop, Kubernetes, Docker) and search platforms (Apache Solr, Lucene, or Elasticsearch).
* Knowledge of descriptive and inferential statistics, regression, supervised and unsupervised learning methods, and multivariate and univariate hypothesis testing.
* Strong problem-solving and analytical skills.
* Excellent communication skills and willingness to learn from senior team members.
* Effective project management skills and ability to prioritize tasks.
* Ability to work quickly, accurately, and efficiently in a fast-paced environment with shifting priorities.

Equal Opportunity

Bloomberg Industry Group maintains a continuing policy of non-discrimination in employment. It is Bloomberg Industry Group's policy to provide equal opportunity and access for all persons, and the Company is committed to attracting, retaining, developing, and promoting the most qualified individuals without regard to age, ancestry, color, gender identity or expression, genetic predisposition or carrier status, marital status, national or ethnic origin, race, religion or belief, sex, sexual orientation, sexual and other reproductive health decisions, parental or caring status, physical or mental disability, pregnancy or maternity/parental leave, protected veteran status, status as a victim of domestic violence, or any other classification protected by applicable law ("Protected Characteristic").
Bloomberg prohibits treating applicants or employees less favorably in connection with the terms and conditions of employment, in all phases of the employment process, because of one or more Protected Characteristics ("Discrimination").
    $74k-107k yearly est. Auto-Apply 11d ago
  • Data Warehouse Developer

    Arizona Department of Administration 4.3company rating

    Phoenix, AZ jobs

AHCCCS - Arizona Health Care Cost Containment System
Accountability, Community, Innovation, Leadership, Passion, Quality, Respect, Courage, Teamwork

The Arizona Health Care Cost Containment System (AHCCCS), Arizona's Medicaid agency, is driven by its mission to deliver comprehensive, cost-effective health care to Arizonans in need. AHCCCS is a nationally acclaimed model among Medicaid programs and a recipient of multiple awards for excellence in workplace effectiveness and flexibility. AHCCCS employees are passionate about their work, committed to high performance, and dedicated to serving the citizens of Arizona. Among government agencies, AHCCCS is recognized for high employee engagement and satisfaction, supportive leadership, and flexible work environments, including remote work opportunities. With career paths for seasoned professionals in a variety of fields, entry-level positions, and internship opportunities, AHCCCS offers meaningful career opportunities in a competitive industry. Come join our dynamic and dedicated team.

Data Warehouse Developer
Information Services Division (ISD)

Job Location: 150 N. 18th Ave, Phoenix, AZ 85007

Posting Details: Must reside in Arizona. Salary: $64,000 - $74,000. Grade: 26. FLSA: Exempt. Closing Date: Open Until Filled. This position may offer the ability to work remotely, within Arizona, based upon the agency's business needs and continual meeting of expected performance measures.

Job Summary: A career in public service awaits you. COME JOIN OUR TEAM! A great benefit of working for the State of Arizona is a fantastic work/life balance. State employees enjoy challenging work, popular remote work options, comprehensive health and wellness benefits, and career growth opportunities. The State of Arizona ranks #30 in the Healthiest 100 Workplaces in America! This recognition honors organizations that champion employee wellbeing through innovative health programs, inclusive wellness initiatives, and a culture rooted in care.
What You'll Do to Contribute to Agency Success:

The Information Services Division (ISD) is looking for a highly motivated individual to join our team as a Data Warehouse Developer for the Business Intelligence team within the Information Services Division at AHCCCS. The Data Warehouse Developer will work with various customers throughout the agency to identify data needs and then define, develop, and test new extract, transform, and load (ETL) solutions. This person will also be asked to maintain core production systems that manage the ETL processes, reporting application, and data warehouse. Occasionally this position will entail being on call for production and technology support to ensure the timeliness and accuracy of all data loads.

The AHCCCS Data Warehousing and Business Intelligence (DW/BI) team plays a critical role in enabling data-driven decision-making that supports Arizona's Medicaid programs and its mission to deliver cost-effective, high-quality healthcare. The team manages an enterprise analytics ecosystem built on Informatica for ETL orchestration, Azure Synapse Dedicated SQL Pool for data warehousing, Cognos and Power BI for reporting and analytics, and Control-M for production workflow management. We are currently leading a major modernization initiative to migrate to Microsoft Fabric, which unifies our data engineering, data warehousing, and analytics capabilities within a single cloud-native platform. This transformation will empower AHCCCS to deliver faster insights, scale efficiently, and leverage advanced AI and automation for smarter analytics.

Joining our team means being part of an environment that values innovation, collaboration, and continuous improvement. You'll have the opportunity to contribute to the future of AHCCCS' enterprise data architecture: designing, building, and optimizing modern data solutions that directly impact the agency's ability to serve Arizonans more effectively.
Major duties and responsibilities include but are not limited to:

• Manage data extraction, transformation, and load procedures, including analysis of existing source systems and data sources, and monitor ETL workflows. Translate functional and technical requirements into detailed architecture, design, and extensible code. Provide expertise in SQL development and data analysis.
• Support and enhance existing extract, transform, and load processes. Create technical design documents that capture source-to-target mapping, define transformation algorithms, and document data quality rules, data flows, and transformations.
• Actively participate in multi-divisional projects as a data lead. Interface with all areas affected by the project, including end users, peers, and infrastructure teams. Provide technical and analytical guidance for enterprise data warehouse architecture and implement best practices in application report development.
• Work with stakeholders to identify analytical requirements. QA and troubleshoot data; develop, test, and validate data sets to be utilized in various analytical reports and dashboards throughout the agency.
• Provide periodic 24/7 on-call support for production issues.
• Attend meetings/seminars within and outside the Agency with regard to responsibilities and maintain effective working relationships with those contacted in the course of work.

Knowledge, Skills & Abilities (KSAs):

Knowledge:
• Data warehouse concepts, MPP best practices, and BI strategies/procedures
• ETL tools and processes (e.g., Informatica, ADF, Notebooks)
• Azure SQL/Synapse database management systems and warehouse & lakehouse (Databricks/Fabric) architectures
• Strong understanding of T-SQL, relational and dimensional data modeling, and Power BI
• Experience working in a Linux environment
• Medicaid/Medicare programs and services

Skills:
• Business requirements analysis
• SQL query optimization
• ETL design and development
• Data analysis / data modeling
• Written and verbal communication
• Interpersonal relationships
• Analysis and problem solving
• Technical writing

Abilities:
• Design and manage the BI tools and applications environment
• Multi-task with quick turnaround times while under pressure
• Develop, manage, and update data models, including physical and logical models of the data warehouse, data mart, and staging area
• Set and meet deadlines
• Work with and understand large data sets
• QA and troubleshoot data
• Work in a remote environment

Qualifications:

Minimum: Any combination of training and experience that meets the knowledge, skills, and abilities (KSAs); typical ways KSAs are obtained may include, but are not limited to, a relevant degree from an accredited college or university, coursework, and respective work experience. Minimum of 2 years of ETL experience, 1 year in Power BI reporting or Tabular modeling, and 2 years of SQL or Synapse experience.

Preferred: Bachelor's degree in Computer Information Systems, Computer Science, or a related area. 2+ years of data warehousing experience related to databases and BI tools. 2+ years of experience defining and developing ETL processes. 2+ years in Synapse. 2+ years in PBI/Tabular. Experience in Microsoft Fabric. Knowledge of health care operations and terminology, preferably within Medicaid.
Pre-Employment Requirements:

• Successfully pass fingerprint background check, prior employment verifications, and reference checks; employment is contingent upon completion of the above-mentioned process and the agency's ability to reasonably accommodate any restrictions.
• Travel may be required for State business. Employees who drive on state business must complete any required driver training (see Arizona Administrative Code R2-10-207.12). If this position requires driving or the use of a vehicle as an essential function of the job to conduct State business, then Driver's License Requirements apply.

All newly hired State employees are subject to and must successfully complete the Electronic Employment Eligibility Verification Program (E-Verify).

Benefits: Among the many benefits of a career with the State of Arizona are:

• 10 paid holidays per year
• Paid vacation and sick time off (13 and 12 days per year, respectively) - start earning it your 1st day (prorated for part-time employees)
• Paid Parental Leave - up to 12 weeks per year of paid leave for a newborn or newly placed foster/adopted child. Learn more about the Paid Parental Leave pilot program here.
• Other leaves - bereavement, civic duty, and military
• A top-ranked retirement program with lifetime pension benefits
• A robust and affordable insurance plan, including medical, dental, life, and disability insurance
• Participation eligibility in the Public Service Loan Forgiveness Program (must meet qualifications)
• RideShare and public transit subsidy
• A variety of learning and career development opportunities

By providing the option of a full-time or part-time remote work schedule, employees enjoy improved work/life balance, report higher job satisfaction, and are more productive. Remote work is a management option and not an employee entitlement or right. An agency may terminate a remote work agreement at its discretion.
For a complete list of benefits provided by the State of Arizona, please visit our benefits page.

Retirement: Lifetime Pension Benefit Program
• Administered through the Arizona State Retirement System (ASRS)
• Defined benefit plan that provides life-long income upon retirement
• Required participation in Long-Term Disability (LTD) and the ASRS Retirement plan
• Pre-taxed payroll contributions begin after a 27-week waiting period (prior contributions may waive the waiting period)

Deferred Retirement Compensation Program
• Voluntary participation
• Program administered through Nationwide
• Tax-deferred retirement investments through payroll deductions

Contact Us: Persons with a disability may request a reasonable accommodation, such as a sign language interpreter or an alternative format, by emailing ********************. Requests should be made as early as possible to allow time to arrange the accommodation. The State of Arizona is an Equal Opportunity/Reasonable Accommodation Employer.
    $64k-74k yearly 60d+ ago

Learn more about B & P Enterprises jobs