Senior Engineer jobs at Philips - 915 jobs

  • Senior Mask Layout Engineer - Hybrid, Analog CMOS

    Nvidia Corporation 4.9 company rating

    Santa Clara, CA

    A leading technology company in California is seeking a Senior Mask Layout Design Engineer to perform physical layout for digital and mixed-signal functions. You'll collaborate with multi-disciplinary teams and have a significant role in mentoring junior mask designers. Ideal candidates should have a BS in Electrical Engineering and over 7 years of layout design experience with expertise in Cadence tools. This position offers a competitive salary and a hybrid work model.
    $138k-180k yearly est. 1d ago
  • Senior Logic Design Engineer - Remote

    Intel Corporation 4.7 company rating

    San Jose, CA

    A leading technology company in California seeks an IP Logic Design Engineer with a Bachelor's degree in relevant fields and at least 3 years of experience in IP design. The role includes designing, integrating, and validating silicon solutions while collaborating with architecture teams. Preferred qualifications include experience with scripting, hardware validation, and industry protocols. This position allows for remote work with competitive compensation ranging from $164,470 to $232,190 USD annually.
    $164.5k-232.2k yearly 1d ago
  • GPU Clocking Engineer - SOC & High-Speed Design (Hybrid)

    Intel Corporation 4.7 company rating

    Santa Clara, CA

    A leading technology company is seeking a GPU Physical Design Engineer to drive advanced clocking solutions. The role involves high-speed clock distribution and collaboration with cross-functional teams. Applicants should have a Bachelor's degree with significant industry experience, strong skills in circuit simulations, and experience in SOC Clock Implementation. This position offers competitive compensation and a hybrid work model allowing flexibility between on-site and off-site work.
    $106k-140k yearly est. 1d ago
  • GPU Physical Design Engineer

    Intel Corporation 4.7 company rating

    Santa Clara, CA

    GPU Physical Design Engineer
    Locations: US, California, Folsom; US, California, Santa Clara
    Time type: Full time
    Posted on: Posted Today
    Job requisition ID: JR0279213

    Job Description: Are you interested in working in a fast-paced, leading-edge environment with endless possibilities for innovating and learning? Then our Graphics Hardware IP (GHI) team has an opportunity for you. In GHI we are passionate about delivering best-in-class visual experiences that enable users to immerse themselves in a new visual future. Within GHI you will be part of a Special Circuits Horizontal team that is responsible for local and global clocking of large designs such as GFX imaging processors, peripheral subsystems (PCIe, Type-C, Display, Media), and SOCs. We are looking for a Graphics Hardware Clocking Engineer to join the team.

    The primary responsibilities for this role will include, but are not limited to:
    * Ownership of complex high-speed global and local clock distribution networks to meet the power and performance targets of these differentiating designs.
    * Work with Architects, PnP, and Execution teams to identify the right solutions in a timely manner.

    Qualifications: A successful candidate will have proven experience demonstrating the following skills and behavioral traits:
    * Team player with good problem-solving skills.
    * Strong written and verbal communication skills.

    Minimum Qualifications (required to be initially considered for this position):
    * Bachelor's degree in Electrical/Electronics/Computer Engineering, Computer Science, or a related field with at least 10 years of industry experience, or a Master's degree in the same fields with at least 8 years of industry experience.
    * Advanced knowledge of SPICE-level circuit simulations.
    * Advanced experience in global and local clocking topologies.
    * 6+ years of hands-on SOC clock implementation experience.
    * Basic understanding of RV and FEV flows.
    * Basic scripting knowledge.

    Job Type: Experienced Hire
    Shift: Shift 1 (United States of America)
    Primary Location: US, California, Folsom
    Additional Locations: US, California, Santa Clara

    Business group: Intel makes possible the most amazing experiences of the future. You may know us for our processors, but we do so much more. Intel invents at the boundaries of technology to make amazing experiences possible for business and society, and for every person on Earth. Harnessing the capability of the cloud, the ubiquity of the Internet of Things, the latest advances in memory and programmable solutions, and the promise of always-on 5G connectivity, Intel is disrupting industries and solving global challenges. Leading on policy, diversity, inclusion, education, and sustainability, we create value for our stockholders, customers, and society.

    Posting Statement: All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.

    Position of Trust: N/A

    Benefits: We offer a total compensation package that ranks among the best in the industry. It consists of competitive pay, stock, bonuses, and benefit programs, which include health, retirement, and vacation. Annual salary range for jobs which could be performed in the US: $161,230.00-$227,620.00 USD. The range displayed on this job posting reflects the minimum and maximum target compensation for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific compensation range for your preferred location during the hiring process.

    Work Model for this Role: This role will be eligible for our hybrid work model, which allows employees to split their time between working on-site at their assigned Intel site and off-site. Job posting details (such as work model, location, or time type) are subject to change.
    $161.2k-227.6k yearly 1d ago
  • Senior Logic Design Engineer

    Intel Corporation 4.7 company rating

    San Jose, CA

    Job Description: Do Something Wonderful! Intel put the Silicon in Silicon Valley. No one else is this obsessed with engineering a brighter future. Every day, we create world-changing technology that enriches the lives of every person on earth. So, if you have a big idea, let's do something wonderful together. Join us, because at Intel, we are building a better tomorrow.

    WHO WE ARE: We are a Custom IP and Silicon engineering team, part of Intel's Silicon Engineering Group. The team works on design and verification of cutting-edge IP and SoCs geared towards Intel's advanced data center and AI SoCs. We look to drive major technological and methodological advancements across multiple areas of IP and SoC design and verification, looking to set a high bar across the organization and ensure that Intel has a competitive product in the market.

    WHO YOU ARE: As an IP Logic Design Engineer your responsibilities will include, but are not limited to:
    * Designing and/or integrating IP for Intel's custom silicon solutions.
    * Working or assisting in architecture, design, implementation, formal verification, emulation, and validation.
    * Creating a design to produce key assets that help improve product KPIs for discrete graphics products.
    * Working with SoC architecture and platform architecture teams to establish silicon requirements.
    * Making appropriate design trade-offs balancing risk, area, power, performance, validation complexity, and schedule.
    * Creating the micro-architectural specification document for the design.
    * Working with external vendors on tools or IPs required for the development of micro-architecture, design, and design qualification of custom silicon designs.
    * Driving vendors' methodology to meet world-class silicon design standards.
    * Architecting area- and power-efficient, low-latency designs with scalability and flexibility.
    * Power- and area-efficient RTL logic design and DV support.
    * Running tools to ensure lint-free and CDC/RDC-clean design, VCLP.
    * Synthesis and timing constraints.
    * Reviewing the verification plan and implementation to ensure design features are verified correctly; resolving and implementing corrective measures for failing RTL tests to ensure correctness of features.

    Qualifications:
    Minimum Qualifications: Bachelor's degree in Computer Science, Electrical Engineering, Computer Engineering, or a related field with 3+ years of relevant experience; or a Master's degree in the same fields with 2+ years of relevant experience; or a PhD in the same fields. Relevant work experience should include the following:
    * Experience with complex IP/ASIC/SOC design implementation.
    * Experience in system and processor architecture.
    * Experience with SystemVerilog/SOC development environments.

    Preferred Qualifications:
    * Experience in scripting languages (e.g., Perl, Tcl, or Python).
    * Experience with hardware validation techniques (e.g., formal verification, test and function verification).
    * Experience designing and implementing complex blocks such as CPUs, GPUs, media blocks, and memory controllers.
    * Experience leading a small team of engineers.
    * Experience with industry-standard protocols (e.g., PCIe, USB, DDR).
    * Experience with the interaction of computer hardware with software.
    * Experience with low-power/UPF implementation and verification techniques.
    * Experience with formal verification techniques.

    Job Type: Experienced Hire
    Shift: Shift 1 (United States of America)
    Primary Location: Virtual US

    Business group: At the Data Center Group (DCG), we're committed to delivering exceptional products and delighting our customers. We offer both broad-market Xeon-based solutions and custom x86-based products, ensuring tailored innovation for diverse needs across general-purpose compute, web services, HPC, and AI-accelerated systems. Our charter encompasses defining business strategy and roadmaps, product management, developing ecosystems and business opportunities, delivering strong financial performance, and reinvigorating x86 leadership. Join us as we transform the data center segment through workload-driven leadership products and close collaboration with our partners.

    Posting Statement: All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.

    Position of Trust: N/A

    Benefits: We offer a total compensation package that ranks among the best in the industry. It consists of competitive pay, stock bonuses, and benefit programs which include health, retirement, and vacation. Annual salary range for jobs which could be performed in the US: $164,470.00-$232,190.00 USD. The range displayed on this job posting reflects the minimum and maximum target compensation for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific compensation range for your preferred location during the hiring process.

    Work Model for this Role: This role is available as fully home-based and generally would require you to attend Intel sites only occasionally based on business need. However, you must live and work from the country specified in the job posting, in which Intel has a legal presence. Due to legal regulations, remote work from any other country is unfortunately not permitted. Job posting details (such as work model, location, or time type) are subject to change. The application window for this job posting is expected to end by 12/31/2027.

    Additional Information: Intel is committed to Responsible Business Alliance (RBA) compliance and ethical hiring practices. We do not charge any fees during our hiring process. Candidates should never be required to pay recruitment fees, medical examination fees, or any other charges as a condition of employment. If you are asked to pay any fees during our hiring process, please report this immediately to your recruiter.
    $164.5k-232.2k yearly 1d ago
  • Senior Logic Design Engineer - Remote

    Intel Corporation 4.7 company rating

    San Jose, CA

    A leading technology firm is seeking an IP Logic Design Engineer to design and integrate IP for custom silicon solutions. The role requires a Bachelor's degree in a relevant field with 3+ years of experience or a Master's with 2+ years. Key responsibilities include architecture and design implementation, working with cross-functional teams, and vendor collaboration. Competitive compensation, with an annual salary range of $164,470-232,190, is offered along with a strong benefits package.
    $164.5k-232.2k yearly 1d ago
  • Senior Silicon Systems Engineer: Power & Performance

    Nvidia Corporation 4.9 company rating

    Santa Clara, CA

    A technology industry leader in California is seeking a Product Definition Engineer to evaluate and optimize pre-production silicon. The successful candidate will work with multi-functional teams, driving new feature initiatives and designing performance-critical product features. Ideal candidates will have significant engineering experience and collaborative skills. The role offers a salary range of 168,000 - 264,500 USD depending on level, alongside equity and benefits.
    $141k-181k yearly est. 3d ago
  • Senior Oracle Cloud Financials Architect for ERP Impact

    IBM Computing 4.7 company rating

    San Francisco, CA

    A leading consulting firm is seeking a Senior Oracle Cloud Financials Solution Architect to join their team. In this pivotal role, you will lead client engagements, design and implement Oracle ERP solutions, and ensure successful adoption of technology. The ideal candidate will have over 10 years of ERP experience, having successfully led full lifecycle Oracle projects. A Bachelor's degree is required, along with strong communication skills and the ability to work in a fast-paced environment. This is a full-time position based in the United States.
    $115k-154k yearly est. 19h ago
  • Senior Oracle Cloud Financials ERP Architect

    IBM Computing 4.7 company rating

    San Francisco, CA

    A global consulting firm is seeking a Senior Solution Architect specializing in Oracle Cloud Financials in San Francisco. The role requires over 10 years of ERP implementation experience with at least two full lifecycle Oracle Cloud projects. A background in the Public Sector and knowledge of GASB Accounting are advantageous. This position offers an opportunity to work with innovative companies and make a significant impact on their cloud journeys.
    $115k-154k yearly est. 19h ago
  • Senior Oracle Cloud Financials Architect for ERP Impact

    IBM Computing 4.7 company rating

    Boston, MA

    A leading technology firm is seeking a Senior Oracle Cloud Financials Solution Architect to support client engagements and lead Oracle Cloud ERP implementation projects. The candidate will have over 10 years of ERP experience, specifically in Oracle Cloud, and a proven track record in solution architecture. This role requires strong communication skills and the ability to thrive in a fast-paced environment. The position is open to candidates anywhere in the United States and may involve travel for client support.
    $96k-123k yearly est. 19h ago
  • Senior ASIC Clocking Architect - PPA, Silicon Bringup

    Nvidia Corporation 4.9 company rating

    Santa Clara, CA

    A leading technology company in Santa Clara is looking for an experienced ASIC engineer to join their Clocks team. You will be responsible for architecting clock domains, optimizing power, performance, and area for NVIDIA chips. Ideal candidates have a BS in Electrical Engineering and experience in RTL design and logic optimization. This role offers a competitive salary and an opportunity to work in a collaborative environment.
    $151k-197k yearly est. 4d ago
  • Senior Oracle Cloud Payroll Architect

    IBM 4.7 company rating

    Dallas, TX

    A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio. Currently, we are looking for a highly experienced, team-oriented Senior Oracle Cloud Payroll Architect to join our talented consulting team. This is a US-based, full-time position, with travel to the customer site on a weekly basis.

    What You'll Do:
    * Consult on best practices for Oracle Cloud Payroll policies
    * Be an expert in the configuration and management of the Oracle Cloud ERP Payroll applications
    * Provide best-practice guidance on payroll business processes and implementation
    * Support the definition and validation of various payroll-related conversion activities
    * Publish weekly status reports to the project management team
    * Coordinate efforts between other module resources to implement the best solution for the client
    * Act as Oracle Cloud Payroll SME to understand the business requirements and interpret them into appropriate configurations of the Oracle Cloud Payroll module
    * Create and update test scripts needed for functional testing
    * Maintain system-related processes and documentation and suggest changes to procedures
    * Assist with continuous process improvement and provide insights into best practices
    * Provide assistance with key system processes (e.g., payroll cycle management, monthly payroll accruals, garnishment and lien processing)
    * Work with technical streams and provide guidance on integrations, conversions, and reports

    What You'll Bring:
    * Bachelor's degree (or equivalent experience)
    * Minimum 5 years of experience as an Oracle Cloud Payroll Lead, with 2-4 years of experience implementing Oracle Cloud
    * Experience with public sector clients such as state governments, counties, and cities considered a plus
    * Hands-on experience with Oracle HCM Cloud tools such as HCM Extract, HDL, and PBL preferred
    * Experience with monthly and quarterly patch testing and issue resolution, performing impact analysis and testing
    * Ability to lead the complete software development lifecycle, including analysis, design, configuration, programming, and unit testing
    * Ability to assist clients with business requirements and suggest changes for process improvements
    * Ability to produce end-user documentation and facilitate knowledge transfer
    * Strong analytical and problem-solving/debugging skills
    * Ability to work in a fast-paced environment with a diverse group of people
    * Capable of working independently and taking initiative with minimal supervision, yet able to participate as a team member with a willingness to help where needed
    * Excellent verbal and written communication, active listening, and interpersonal skills
    * Organized and detail-oriented
    $81k-105k yearly est. 5d ago
  • Senior AI Engineer, NeMo Retriever - Model Optimization and MLOps

    Nvidia 4.9 company rating

    Santa Clara, CA

    NVIDIA's technology is at the heart of the AI revolution, touching people across the planet by powering everything from self-driving cars and robotics to co-pilots and more. Join us at the forefront of technological advancement in intelligent assistants and information retrieval. NVIDIA NIM provides containers to self-host GPU-accelerated inferencing microservices for pre-trained and customized AI models across clouds, data centers, RTX AI PCs, and workstations. NIM microservices expose industry-standard APIs for simple integration into AI applications, development frameworks, and workflows. Built on pre-optimized inference engines from NVIDIA and the community, including NVIDIA TensorRT and TensorRT-LLM, NIM microservices optimize response latency and throughput for each combination of foundation model and GPU. NVIDIA NeMo Retriever is a collection of NIMs for building multimodal extraction, re-ranking, and embedding pipelines with high accuracy and maximum data privacy. It delivers quick, context-aware responses for AI applications like advanced retrieval-augmented generation (RAG) and agentic AI workflows. The NeMo Retriever team is looking for an AI Engineer to join our team, focusing on the intersection of machine learning development, performance optimization, and MLOps. This role requires a unique blend of technical expertise in ML model development, system optimization, and operational excellence. We are looking for someone with a passion for working on the world's most complicated problems in the Generative AI, LLM, MLLM, and RAG spaces using our innovative hardware and software platforms. You will leverage and augment existing tools that enable building NIMs, which power flexible, multi-modal retrievers and agents. If you're creative and passionate about solving real-world conversational AI problems, come join us. (A minimal integration sketch follows this listing.)

    What You'll Be Doing:
    * Develop and maintain NIMs that containerize optimized models and expose OpenAPI-standard interfaces, using Python or an equivalent performant language.
    * Work closely with partner teams to understand requirements, build and evaluate POCs, and develop roadmaps for production-level tools.
    * Enable development of integrated systems (AI Blueprints) that provide a unified, turnkey experience.
    * Help build and maintain our Continuous Delivery pipeline with the goal of moving changes to production faster and safer while ensuring key operational standards.
    * Provide peer reviews to other specialists, including feedback on performance, scalability, and correctness.

    What We Need To See:
    * Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field (or equivalent experience).
    * 8+ years of demonstrated experience in a similar or related role.
    * Python programming expertise with deep learning (DL) frameworks such as PyTorch.
    * Experience delivering software in a cloud context and familiarity with the patterns and processes of handling cloud infrastructure.
    * Knowledge of MLOps technologies such as Docker Compose, containers, Kubernetes, Helm, and data center deployments.
    * Familiarity with ML libraries, especially PyTorch, TensorRT, or TensorRT-LLM.
    * Excellent in-depth, hands-on understanding of NLP, LLM, MLLM, Generative AI, and RAG workflows.
    * Self-starter with a passion for growth, enthusiasm for continuous learning, and sharing findings across the team.
    * Extremely motivated, highly passionate, and curious about new technologies.

With competitive salaries and a generous benefits package, NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. Due to unprecedented growth, our exclusive engineering teams are rapidly growing. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until January 13, 2026. This posting is for an existing vacancy. NVIDIA uses AI tools in its recruiting processes. NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
    $138k-180k yearly est. Auto-Apply 39d ago
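
    The listing above centers on microservices that expose industry-standard inference APIs. As a point of reference, here is a minimal Python sketch of calling a self-hosted, OpenAI-compatible embeddings endpoint of that general kind; the URL, port, model name, and response shape are illustrative assumptions, not details taken from the posting.

        # Minimal sketch: query a locally hosted, OpenAI-compatible embeddings
        # microservice. The endpoint URL and model name are hypothetical.
        import requests

        EMBEDDINGS_URL = "http://localhost:8000/v1/embeddings"  # assumed local deployment
        payload = {
            "model": "example-embedding-model",                 # hypothetical model id
            "input": ["How do retrieval-augmented generation pipelines work?"],
        }

        response = requests.post(EMBEDDINGS_URL, json=payload, timeout=30)
        response.raise_for_status()

        # OpenAI-style responses typically return vectors under data[0].embedding.
        embedding = response.json()["data"][0]["embedding"]
        print(f"Received an embedding with {len(embedding)} dimensions")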
  • ATE and SLT Senior Thermal Engineer

    Nvidia 4.9 company rating

    Santa Clara, CA

    We are now looking for an ATE/SLT Senior Thermal Engineer to join our team in Santa Clara, CA. This role is a fantastic opportunity to apply your expertise and be part of a world-class team dedicated to pushing the boundaries of technology. The ATE/SLT hardware group at NVIDIA delivers the interface equipment for IC package testing during final test and system-level test. You will engage with highly customized high-speed sockets, active thermal plungers, and load boards, focusing especially on thermal solutions. Your efforts will be vital in creating, validating, and approving new thermal technologies that correspond with the future of IC chip roadmaps.

    What you'll be doing:
    * Introduce, develop, validate, and qualify new ATE/SLT thermal technologies in accordance with future IC chip roadmaps.
    * Review and approve the development of passive and active thermal control and corresponding change kits for ATE/SLT IC testing for product bring-up and production.
    * Provide ATE/SLT test fixture/hardware solutions, from build and manufacturing ordering through schedule monitoring, verification, and improvement.
    * Collect and analyze engineering data, making decisions and recommendations for improvement.
    * Provide global cross-functional support and drive projects with internal and external collaborators.
    * Apply strong hardware troubleshooting and root-cause analysis skills to provide preventive actions.
    * Debug and support surrounding hardware, including sockets and PCBs.
    * On-duty lab support is required, with occasional weekend support.

    What we need to see:
    * Bachelor's degree or equivalent experience. Degrees in EE and ME are preferred.
    * Over 7 years of experience in IC testing engineering and ATE/SLT hardware engineering.
    * Solid experience in active thermal regulation within ATE/SLT; knowledge of PID control, evaporators, test procedures, and data parsing.
    * Comprehension of thermal simulation reports and mechanical drawings.
    * Knowledge of IC testing, ATE/SLT interface hardware, maintenance, troubleshooting, and repairs.
    * Demonstrated problem-solving abilities along with the capacity to deliver solutions and avoid repetition.
    * Willingness to conduct hands-on work beyond thermal hardware, such as socket pin repair and electrical wiring.
    * Capability to lift 30-pound thermal plungers and/or load boards.
    * Understanding of electrostatic discharge (ESD) prevention and control.
    * Strong collaborator with excellent communication abilities and experience working across various functional areas.

    Ways to stand out from the crowd:
    * Demonstrated history of working directly with ATE/SLT thermal hardware from engineering setups to high-volume production.
    * Diligent approach to align with all specifications and minimize risks.
    * Dedication to engineering excellence through building, hands-on troubleshooting, and preventive action.

    With competitive salaries and a generous benefits package, NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. This role presents an opportunity to have a wide impact at NVIDIA by improving the factory planning function. Are you creative, hard-working, dedicated, and determined? Do you love a challenge? If so, we want to hear from you! Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 253,000 USD. You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until January 27, 2026. This posting is for an existing vacancy. NVIDIA uses AI tools in its recruiting processes. NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
    $138k-180k yearly est. Auto-Apply 7d ago
  • Senior Timing Methodology Engineer

    Nvidia 4.9 company rating

    Santa Clara, CA

    NVIDIA has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI - the next era of computing. NVIDIA is a "learning machine" that constantly evolves by adapting to new opportunities that are hard to resolve, that only we can seek, and that matter to the world. This is our life's work, to amplify human inventiveness and intelligence.

    We are seeking an innovative Senior Timing Methodology Engineer to help drive sign-off strategies for the world's leading GPUs and SoCs. This position is a broad opportunity to optimize performance, yield, and reliability through increasingly comprehensive modeling, informative analysis, and automation. This work will influence the entire next generation computing landscape through critical contributions across NVIDIA's many product lines, ranging from consumer graphics to self-driving cars and the growing domain of artificial intelligence! We have crafted a team of highly motivated people whose mission is to push the frontiers of what is possible today and define the platform for the future of computing. If you are fascinated by the immense scale of precision, craftsmanship, and artistry required to make billions of transistors function on every die at technology nodes as deep as 5 nm and beyond, this is an ideal role.

    What You'll Be Doing:
    * Improve and validate flows for PrimeTime, PrimeShield, and Tempus STA QoR metrics and tools for the sign-off flow in high-speed designs, with a focus on CAD and automation.
    * Develop custom flows for validating the QoR of ETM models, both for standard cells and custom IPs.
    * Develop flows and recommendations for STA sign-off to model deep-submicron physical effects: aging, self-heating, thermal impact, IR drop, etc.
    * Collaborate with technology leads, VLSI physical design, and timing engineers to define and deploy the most sophisticated strategies for signing off timing in design for world-class silicon performance.
    * Develop tools and methodologies to improve design performance, predictability, and silicon reliability beyond what industry-standard tools can offer.
    * Work on various aspects of STA, constraints, and timing and power optimization.

    What We Need To See:
    * MS (or equivalent experience) in Electrical or Computer Engineering with 3 years' experience in ASIC design and timing.
    * Good understanding of modeling circuits for sign-off.
    * Good knowledge of extraction, device physics, STA methodology, and EDA tool limitations. Good understanding of the mathematics/physics fundamentals of electrical design.
    * Clear understanding of low-power design techniques such as multi-VT, clock gating, power gating, block activity power, and Dynamic Voltage-Frequency Scaling (DVFS), plus CDC and signal/power integrity.
    * Understanding of 3DIC, stacking, packaging, and self-heating and their impact on timing/STA closure.
    * Background with crosstalk, electromigration, noise, OCV, and timing margins. Familiarity with clocking specs: jitter, IR drop, crosstalk, SPICE analysis.
    * Understanding of standard cell/memory/IO IP modeling and its usage in the ASIC flow. Hands-on experience in advanced CMOS technologies, designing with FinFET technology at 5nm/3nm/2nm and beyond.
    * Expertise in coding: Tcl and Python; C++ is a plus. Familiarity with industry-standard ASIC tools: PrimeTime, ICC, RedHawk, Tempus, etc.
    * Strong communication skills and a good standout colleague.

    With competitive salaries and a generous benefits package, NVIDIA is widely considered to be one of the technology world's most desirable employers. We welcome you to join our team, with some of the most hard-working people in the world working together to promote rapid growth. Are you passionate about becoming a part of a best-in-class team supporting the latest in GPU and AI technology? If so, we want to hear from you. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 136,000 USD - 218,500 USD for Level 3, and 168,000 USD - 264,500 USD for Level 4. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until January 27, 2026. This posting is for an existing vacancy. NVIDIA uses AI tools in its recruiting processes. NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
    $138k-180k yearly est. Auto-Apply 7d ago
  • Senior DFT Engineer

    Nvidia 4.9 company rating

    Santa Clara, CA

    NVIDIA has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI - the next era of computing. NVIDIA is a "learning machine" that constantly evolves by adapting to new opportunities that are hard to solve, that only we can tackle, and that matter to the world. This is our life's work, to amplify human creativity and intelligence. Make the choice to join us today.

    NVIDIA's DFX team is looking for an exceptional DFT Engineer to help shape the future of compute. As stewards of the entire Scan Test Lifecycle, we drive innovation for the most advanced silicon in the world, spanning 2.5D/3D AI data center platforms, Gaming and Enterprise GPUs, and complex SOCs powering Autonomous Machines, Robotics, and Industrial systems. You will innovate at scale, designing and prototyping breakthrough Test Architectures for reticle-sized, multi-chiplet products, from RTL to verification to post-silicon ATE bring-up. Join a globally recognized team that consistently delivers breakthrough performance across multiple high-impact tape-outs each year.

    What you'll be doing:
    * Develop and deploy industry-leading test methodologies on NVIDIA's next-generation silicon platforms.
    * Collaborate with leading EDA vendors to shape tool capabilities that meet NVIDIA's ambitious design goals, and partner with internal CAD teams to drive scalable, automated solutions.
    * Co-architect novel DFT strategies alongside VLSI and Product Engineering teams to push the boundaries of silicon test innovation.
    * Own the full ATPG lifecycle (verification, coverage analysis, pattern generation, and ATE bring-up) across NVIDIA's full product portfolio.
    * Guide and mentor junior engineers, helping them navigate complex design trade-offs to achieve world-class quality and efficiency.

    What we need to see:
    * MS/PhD or equivalent experience in Electrical Engineering or a related field.
    * 5+ years of hands-on experience in Design-For-Test (DFT).
    * Deep knowledge of DFT tools, methodologies, and test strategies for complex, large-scale designs.
    * Strong experience with industry-standard ATPG tools.
    * Clear, effective communicator with strong written and verbal skills.
    * Passion for mentoring and scaling technical excellence in a team.

    Ways to stand out from the crowd:
    * Experience with 2.5D/3D ICs, multi-chiplet architectures, or reticle-sized designs.
    * Background in developing or enhancing EDA tool flows.
    * Experience with silicon testing and Automatic Test Equipment (ATE).
    * Expertise in using programming languages and AI for automation.
    * Personal success stories in leading org-wide changes.

    NVIDIA offers highly competitive salaries and a comprehensive benefits package. At NVIDIA, we work on the hardest problems, and some of the most brilliant and motivated people in their fields choose to work here. Our world-class engineering teams are growing, as we lead the AI revolution from the front. If you are ready to make an impact in this journey, we want to hear from you! Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 168,000 USD - 264,500 USD for Level 4, and 196,000 USD - 310,500 USD for Level 5. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until January 25, 2026. This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes. NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
    $138k-180k yearly est. Auto-Apply 9d ago
  • Senior ASIC Timing Engineer

    Nvidia 4.9 company rating

    Santa Clara, CA

    NVIDIA has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI - the next era of computing. NVIDIA is a "learning machine" that constantly evolves by adapting to new opportunities which are hard to solve, that only we can pursue, and that matter to the world. This is our life's work, to amplify human inventiveness and intelligence. We are now looking for a motivated Senior ASIC Timing Engineer to join our dynamic and growing team. If you want to challenge yourself and be a part of something great, join us today!

    What you'll be doing:
    * Drive timing analysis and closure of NVIDIA's GPUs, CPUs, DPUs, and SoCs at block level, cluster level, and/or full-chip level.
    * Work with PD, DFX, Clocks, and other teams on timing closure strategy, creating timing constraints, driving timing and power convergence, and ECO implementation.
    * Apply knowledge and experience to improve timing convergence flows, working with the methodology teams.

    What we need to see:
    * BS (or equivalent experience) in Electrical or Computer Engineering with 5+ years' experience, or MS (or equivalent experience) with 2+ years' experience, in Timing and STA.
    * Hands-on experience in full-chip/sub-chip Static Timing Analysis (STA) and timing convergence, and in timing constraints generation and management.
    * Expertise in analysis and fixing of timing paths through ECOs, including crosstalk and noise analysis.
    * Expertise and in-depth knowledge of industry-standard STA and timing convergence tools.
    * Knowledge of deep sub-micron process nodes and hands-on experience in modeling and converging timing in these nodes.

    Ways to stand out from the crowd:
    * Background in domain-specific STA and timing convergence, such as GPUs, CPUs, DPUs/network processors, or SOCs.
    * Understanding of DFT logic and experience with DFT timing closure for various modes, e.g., scan, BIST, etc.
    * Understanding and timing closure of digital logic/macros in AMS designs/IPs.
    * Experience in methodology and/or flow development as well as automation.

    NVIDIA is widely considered to be the leader of AI computing, and one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 136,000 USD - 218,500 USD for Level 3, and 168,000 USD - 264,500 USD for Level 4. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until January 27, 2026. This posting is for an existing vacancy. NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
    $138k-180k yearly est. Auto-Apply 7d ago
  • Senior Math Libraries Engineer - AI and HPC

    Nvidia 4.9 company rating

    Santa Clara, CA

    The NVIDIA Math Libraries team is looking for a senior engineer to join our development efforts in the area of kernel generation for AI and HPC, specifically targeting matrix operations, JITing, and fusions. Around the world, leading commercial and academic organizations are revolutionizing AI, scientific and engineering simulations, and data analytics using data centers powered by GPUs. Applications of these technologies include healthcare, NLP, VR, deep learning, autonomous vehicles, and countless others. Did you know our team develops the GPU-accelerated mathematical libraries that make all of this possible? (A minimal JIT-and-fusion sketch follows this listing.)

    What you will be doing:
    * Scoping, designing, and implementing high-quality, high-performance numerical dense linear algebra software on GPUs.
    * Owning the execution of projects involving multiple engineers and sometimes teams.
    * Providing technical leadership and feedback to library engineers working with you on projects, and sometimes mentoring interns.
    * Working closely with product management and other internal and external customers to understand feature and performance requirements and contribute to the technical roadmaps of libraries.
    * Finding opportunities to improve library performance and reduce code maintenance overhead through re-architecting.
    To be successful in these responsibilities, which are by nature sophisticated, you will need to find and explain complex solutions, exercise leadership, and coordinate with multiple teams to work towards your goals.

    What we need to see:
    * PhD, Master's, or Bachelor's degree in Computer Science, Applied Math, or a related science or engineering field of study (or equivalent experience).
    * 8+ years of experience in designing, developing, testing, maintaining, and performance-optimizing HPC software using C++.
    * Strong fundamentals in kernel generation and composable library design for linear algebra.
    * Leadership skills in driving software development projects.
    * Strong collaboration, communication, and documentation habits.
    * Kernel generation and JIT focus/experience desired.

    Ways to stand out from the crowd:
    * Experience with parallel programming, ideally using CUDA, MPI, OpenMP, OpenACC, or pthreads.
    * Good understanding of Machine Learning and Deep Learning technologies, as well as knowledge of GPU (preferred) or CPU hardware architecture.
    * Experience with low-level programming using assembly for performance optimization and operator fusion is a huge plus.
    * Experience with agile software development practices using project management tools such as JIRA.
    * A scripting language, preferably Python.

    With a competitive salary package and benefits, NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. Are you a creative and autonomous engineer who loves challenges? Do you have a genuine passion for advancing the state of AI and machine learning across a variety of industries? If so, we want to hear from you. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until January 13, 2026. This posting is for an existing vacancy. NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
    $138k-180k yearly est. Auto-Apply 22d ago
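
    The listing above emphasizes kernel generation, JITing, and fusion for matrix operations. As a rough, generic illustration of those ideas (not NVIDIA library code), the sketch below expresses a matrix multiply, bias add, and ReLU as one function and JIT-compiles it with PyTorch's torch.compile, which may fuse the steps into fewer kernels; the shapes are assumed for the example.

        # Generic JIT-and-fusion sketch using PyTorch 2.x; shapes are illustrative.
        import torch

        def gemm_bias_relu(x: torch.Tensor, w: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
            # Matrix multiply, bias add, and ReLU written as one function so a
            # JIT compiler can generate fused kernels for the whole expression.
            return torch.relu(x @ w + b)

        compiled = torch.compile(gemm_bias_relu)

        x = torch.randn(128, 256)
        w = torch.randn(256, 512)
        b = torch.randn(512)
        print(compiled(x, w, b).shape)  # torch.Size([128, 512])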
  • Senior DFT Engineer

    Nvidia 4.9 company rating

    Santa Clara, CA

    NVIDIA has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI - the next era of computing. NVIDIA is a “learning machine” that constantly evolves by adapting to new opportunities that are hard to solve, that only we can tackle, and that matter to the world. This is our life's work, to amplify human creativity and intelligence. Make the choice to join us today.

    NVIDIA's DFX team is looking for an exceptional DFT Engineer to help shape the future of compute. As stewards of the entire Scan Test Lifecycle, we drive innovation for the most advanced silicon in the world, spanning 2.5D/3D AI data center platforms, Gaming and Enterprise GPUs, and complex SOCs powering Autonomous Machines, Robotics, and Industrial systems. You will innovate at scale, designing and prototyping breakthrough Test Architectures for reticle-sized, multi-chiplet products, from RTL to verification to post-silicon ATE bring-up. Join a globally recognized team that consistently delivers breakthrough performance across multiple high-impact tape-outs each year.

    What you'll be doing:
    * Develop and deploy industry-leading test methodologies on NVIDIA's next-generation silicon platforms.
    * Collaborate with leading EDA vendors to shape tool capabilities that meet NVIDIA's ambitious design goals, and partner with internal CAD teams to drive scalable, automated solutions.
    * Co-architect novel DFT strategies alongside VLSI and Product Engineering teams to push the boundaries of silicon test innovation.
    * Own the full ATPG lifecycle (verification, coverage analysis, pattern generation, and ATE bring-up) across NVIDIA's full product portfolio.
    * Guide and mentor junior engineers, helping them navigate complex design trade-offs to achieve world-class quality and efficiency.

    What we need to see:
    * MS/PhD or equivalent experience in Electrical Engineering or a related field.
    * 5+ years of hands-on experience in Design-For-Test (DFT).
    * Deep knowledge of DFT tools, methodologies, and test strategies for complex, large-scale designs.
    * Strong experience with industry-standard ATPG tools.
    * Clear, effective communicator with strong written and verbal skills.
    * Passion for mentoring and scaling technical excellence in a team.

    Ways to stand out from the crowd:
    * Experience with 2.5D/3D ICs, multi-chiplet architectures, or reticle-sized designs.
    * Background in developing or enhancing EDA tool flows.
    * Experience with silicon testing and Automatic Test Equipment (ATE).
    * Expertise in using programming languages and AI for automation.
    * Personal success stories in leading org-wide changes.

    NVIDIA offers highly competitive salaries and a comprehensive benefits package. At NVIDIA, we work on the hardest problems, and some of the most brilliant and motivated people in their fields choose to work here. Our world-class engineering teams are growing, as we lead the AI revolution from the front. If you are ready to make an impact in this journey, we want to hear from you! Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 168,000 USD - 264,500 USD for Level 4, and 196,000 USD - 310,500 USD for Level 5. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until January 25, 2026. This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes. NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
    $138k-180k yearly est. Auto-Apply 10d ago
  • Senior GenAI Algorithms Engineer - Model Optimizations for Inference

    Nvidia 4.9 company rating

    Santa Clara, CA

    NVIDIA is at the forefront of the generative AI revolution! The Algorithmic Model Optimization Team focuses specifically on optimizing generative AI models such as large language models (LLMs) and diffusion models for maximal inference efficiency, using techniques ranging from quantization, speculative decoding, sparsity, distillation, and pruning to neural architecture search, along with streamlined deployment strategies on open-source inference frameworks. We are seeking a Senior Deep Learning Algorithms Engineer to improve innovative generative AI models like LLMs, VLMs, multimodal models, and diffusion models. In this role, you will design, implement, and productionize model optimization algorithms for inference and deployment on NVIDIA's latest hardware platforms. The focus is on ease of use, compute and memory efficiency, and achieving the best accuracy-performance tradeoffs through software-hardware co-design. Your work will span multiple layers of the AI software stack, ranging from algorithm design to integration, within NVIDIA's ecosystem (TensorRT Model Optimizer, NeMo/Megatron, TensorRT-LLM) and open-source frameworks (PyTorch, Hugging Face, vLLM, SGLang). You may also dive deeper into GPU-level optimization, including custom kernel development with CUDA and Triton. This role offers a unique opportunity to work at the intersection of research and engineering, pushing the boundaries of large-scale AI optimization. We are looking for passionate engineers with strong foundations in both machine learning and software systems/architecture who are eager to make a broad impact across the AI stack. (A generic quantization sketch follows this listing.)

    What you'll be doing:
    * Design and build modular, scalable model optimization software platforms that deliver exceptional user experiences while supporting diverse AI models and optimization techniques to drive widespread adoption.
    * Explore, develop, and integrate innovative deep learning optimization algorithms (e.g., quantization, speculative decoding, sparsity) into NVIDIA's AI software stack, e.g., TensorRT Model Optimizer, NeMo/Megatron, and TensorRT-LLM.
    * Deploy optimized models into leading OSS inference frameworks and contribute specialized APIs, model-level optimizations, and new features tailored to the latest NVIDIA hardware capabilities.
    * Partner with NVIDIA teams to deliver model optimization solutions for customer use cases, ensuring optimal end-to-end workflows and balanced accuracy-performance trade-offs.
    * Conduct deep GPU kernel-level profiling to identify and capitalize on hardware and software optimization opportunities (e.g., efficient attention kernels, KV cache optimization, parallelism strategies).
    * Drive continuous innovation in deep learning inference performance to strengthen NVIDIA platform integration and expand market adoption across the AI inference ecosystem.

    What we need to see:
    * Master's, PhD, or equivalent experience in Computer Science, Artificial Intelligence, Applied Mathematics, or a related field.
    * 5+ years of relevant work or research experience in deep learning.
    * Strong software design skills, including debugging, performance analysis, and test development.
    * Proficiency in Python, PyTorch, and modern ML frameworks/tools.
    * Proven foundation in algorithms and programming fundamentals.
    * Strong written and verbal communication skills, with the ability to work both independently and collaboratively in a fast-paced environment.

    Ways to stand out from the crowd:
    * Contributions to PyTorch, JAX, vLLM, SGLang, or other machine learning training and inference frameworks.
    * Hands-on experience training or fine-tuning generative AI models on large-scale GPU clusters.
    * Proficiency in GPU architectures and compilation stacks, and adeptness at analyzing and debugging end-to-end performance.
    * Familiarity with NVIDIA's deep learning SDKs (e.g., TensorRT).
    * Experience developing high-performance GPU kernels for machine learning workloads using CUDA, CUTLASS, or Triton.

    Increasingly known as “the AI computing company” and widely considered to be one of the technology world's most desirable employers, NVIDIA offers highly competitive salaries and a comprehensive benefits package. Are you creative, motivated, and in love with a challenge? If so, we want to hear from you! Come join our model optimization group, where you can help build real-time, cost-effective computing platforms driving our success in this exciting and rapidly growing field. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 218,500 USD for Level 3, and 184,000 USD - 287,500 USD for Level 4. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until January 13, 2026. This posting is for an existing vacancy. NVIDIA uses AI tools in its recruiting processes. NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
    $138k-180k yearly est. Auto-Apply 22d ago
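
    The listing above names quantization as one of the inference-optimization techniques in scope. To make the general idea concrete, here is a minimal, generic PyTorch sketch of post-training dynamic quantization on a small linear model; the layer sizes are assumptions, and this uses stock torch.ao.quantization rather than the TensorRT Model Optimizer workflow the role targets.

        # Generic post-training dynamic quantization sketch in stock PyTorch.
        # Layer sizes are illustrative; this is not NVIDIA-specific tooling.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 128),
        )
        model.eval()

        # Convert Linear weights to int8; activations are quantized dynamically
        # at inference time.
        quantized = torch.ao.quantization.quantize_dynamic(
            model, {nn.Linear}, dtype=torch.qint8
        )

        x = torch.randn(4, 512)
        with torch.no_grad():
            print(quantized(x).shape)  # torch.Size([4, 128])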
