Why ML?


ML is a set of algorithmic methods that discover patterns in seemingly unrelated data, providing you with important information to facilitate decision making. Most industries utilize data in product development, process improvement, quality management, risk assessment and other areas; how effectively you use your data can determine whether you rise above the competition or fall behind.

Why on Edge?


Although ML in the cloud has been used for quite some time, many applications don’t require an online server between the communicating devices. ML applications on the edge can deliver inferences in the field, offline and in real time.

ML on the edge makes the system power efficient, fast and secure. User privacy is at the forefront because personal data never leaves your device. ML on the edge also saves the cloud resources and compute power that would otherwise be spent storing data and maintaining data pipelines.

Why Microchip?


We offer 8-, 16- and 32-bit microcontrollers (MCUs), microprocessors (MPUs) and Field-Programmable Gate Arrays (FPGAs). With a simple ML design process that can bring an ML engine to each of these platforms quickly and efficiently, we offer solutions for a wide spectrum of users, from embedded systems engineers to data scientists. Our AutoML-powered design process automates the steps of building an ML model, iterating until a satisfactory model is identified.

Streamline Your ML Model Development


Depending on the complexity of your design, you can use an MCU, MPU or FPGA. Whether you are working on vibration detection on an 8-bit MCU or image detection and classification on an MPU or FPGA, you can rely on our MPLAB® Machine Learning Development Suite to automate each step of the ML workflow and generate AutoML-powered code for many use cases.

Our software toolkits allow the use of popular ML frameworks including TensorFlow, Keras, Caffe and many others covered by the ONNX umbrella as well as those found within TinyML and TensorFlow Lite. This combination of hardware and software enables you to design a variety of applications including high-performance AI acceleration cards for data centers, self-driving cars, security and surveillance, electronic fences, augmented and virtual reality headsets, drones, robots, satellite imagery and communication centers.

Discover how our proven reference designs and network of experienced partners can help you reduce risk, time to market, power consumption and application costs.

Microchip Silicon Platforms for Machine Learning Data Flow


Build Your Own Model (MCUs and MPUs)


Our MPLAB Machine Learning Development Suite can help you build efficient, low-footprint ML models that can be flashed directly to Microchip MCUs and MPUs. Because this development suite is powered by AutoML, you can say good-bye to repetitive, tedious and time-consuming model building. With feature extraction, training, validation and testing, this development suite optimizes models to satisfy the memory constraints of MCUs and MPUs. The API is fully interoperable with Python, so the two can be used interchangeably during model development.
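To illustrate the stages the suite automates, here is a generic Python sketch of a feature-extraction, training, validation and testing loop. It uses scikit-learn and synthetic sensor data purely as a stand-in; it is not the MPLAB Machine Learning Development Suite API and does not reflect its actual interfaces.

```python
# Generic sketch of the feature-extraction / train / test loop that an
# AutoML tool automates. scikit-learn and synthetic data are used as a
# stand-in only; this is NOT the MPLAB Machine Learning Development Suite API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(windows):
    """Toy feature extraction: summary statistics per sensor window."""
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1),
                            windows.min(axis=1), windows.max(axis=1)])

# Placeholder data: 1000 windows of 128 accelerometer samples, binary labels
windows = np.random.randn(1000, 128)
labels = np.random.randint(0, 2, size=1000)

X = extract_features(windows)
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)

# A small model keeps the memory footprint closer to what an MCU can hold
model = RandomForestClassifier(n_estimators=10, max_depth=4)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In practice, an AutoML flow repeats this loop over candidate features and model configurations, keeping the smallest model that meets the accuracy target and the device's memory constraints.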

Bring Your Own Model (MCUs and MPUs)


You can easily bring your existing Deep Neural Network (DNN) model to an MCU or MPU device. After converting a TensorFlow model to a TensorFlow Lite model, you can load it into the device’s Flash memory for inference. MPLAB Harmony V3 can help you add the ML run-time engine and integrate it with other peripherals.
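As a rough sketch of the conversion step (the model path and file names below are placeholders, not taken from a Microchip example), a trained TensorFlow SavedModel can be converted to a quantized TensorFlow Lite flatbuffer and then embedded in Flash as a C array:

```python
# Minimal sketch: convert a trained TensorFlow model to TensorFlow Lite with
# post-training quantization so it fits in MCU/MPU Flash memory.
# "my_dnn_savedmodel" and the output file names are placeholders.
import tensorflow as tf

# Load a model previously exported with tf.saved_model.save()
converter = tf.lite.TFLiteConverter.from_saved_model("my_dnn_savedmodel")

# Post-training quantization shrinks the model and enables faster inference
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()

# Write the flatbuffer; on an MCU it is typically embedded as a C array
# (for example with `xxd -i model.tflite > model_data.c`) and placed in Flash.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```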

Bring Your Own Model (FPGAs)


The process flow for FPGAs is the same as for MCUs and MPUs, but for FPGAs, our state-of-the-art VectorBlox™ Accelerator Software Development Kit (SDK) is used to convert a high-level DNN to a lighter version, such as a TensorFlow Lite model, and to deploy it on the target device.

Neural networks remain easy to program and power efficient even if you don’t have prior FPGA design experience. The VectorBlox SDK comes with instructions to build a smart AI camera platform based on the PolarFire® FPGA video kit so that you can evaluate different Convolutional Neural Networks (CNNs).

VectorBlox Matrix Processor IP (MXP) and CNN accelerators speed up complex deep learning (DL) algorithms. The VectorBlox Accelerator SDK is currently available to participants in our early access program; send us an email if you would like to join.

 

Smart Embedded Vision With Machine Learning

Smart Predictive Maintenance

Smart Human Machine Interfaces

Machine Learning Workstations, Servers and Appliances

Ready to Get Started with Machine Learning?


Getting Started With Microchip MCUs


Download Your Software

If you are using one of our MCUs or MPUs, our MPLAB® development ecosystem seamlessly integrates with our development boards and the software kits and solutions provided by our Machine Learning design partners. These tools include:

Explore Our Machine Learning Tutorials, Example Applications and Other Information

Buy a Development Board

Need Some Help?


We are here to support you. Contact our Client Success Team to get assistance with your design.

AI/Machine Learning Partners


We’ve partnered with the industry’s leading design houses to bring you state-of-the-art artificial intelligence (AI)-based solutions and software tools that support our portfolio of silicon products. We work closely with these partners to ensure a seamless interface and tight integration with our platforms, providing you with a superior development experience.