Edge Driver Architecture and Matter Support (technical discussion)

It is very sad that SmartThings, which was originally the industry leader in supporting the Matter standard (I believe they were the first Matter controller to support Matter 1.2), has now, because of the approach they have taken with Edge drivers, fallen pretty far behind Apple and Home Assistant in Matter support.

One of matter’s primary goals was to make onboarding simple for the everyday customer with no technical knowledge. :thinking:

The failure to complete onboarding is as bad as the Aeotec Z-Wave WallMote, which had 12 functions with pretty much every other certified Z-Wave controller, but only 8 with SmartThings. For the same reason: SmartThings has never supported multi-endpoint devices in the way they are designed by the independent third-party standards.

They have a design culture which, for reasons that have never been entirely clear (except perhaps the fact that the original CEO was not an engineer), does not start with the specification and then design an architecture that would support it. Instead, they typically start with an individual device, design to that, and then try to make modifications as it turns out that other devices with a different number of endpoints or different features don’t work properly. :disappointed_relieved:

7 Likes

… probing, discovering, guessing EPs and clusters as if there weren’t a comprehensive, complete database already: the CSA.

Vendor ID, Product ID, Primary Device Type ID, Endpoints, Clusters and more - machine readable if necessary.

The ST architecture should maintain its own relational database with fingerprints and all the information needed, fed by external databases.
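To make the idea concrete, this is the kind of record I picture in such a database, as a plain-Lua sketch. The cluster and device type IDs are from the Matter spec, but every field name and example value here is invented for illustration; this is not the actual DCL schema and not the Edge SDK.

```lua
-- Hypothetical sketch: one record in an ST-side fingerprint database fed from
-- the CSA compliance data. Field names and values are invented for illustration.
local csa_db = {
  -- key: "vendorId:productId" (0xFFF1 is a Matter test vendor ID)
  ["0xFFF1:0x8001"] = {
    product_name = "Example TRV",
    endpoints = {
      [1] = {
        device_type     = 0x0301,             -- Thermostat device type
        server_clusters = { 0x0201, 0x0402 }, -- Thermostat, Temperature Measurement
        client_clusters = {},
      },
    },
  },
}

-- Look the device up by the identifiers it already announces while commissioning,
-- instead of probing cluster by cluster.
local function lookup(vendor_id, product_id)
  return csa_db[string.format("0x%04X:0x%04X", vendor_id, product_id)]
end

local record = lookup(0xFFF1, 0x8001)
if record then
  for ep, info in pairs(record.endpoints) do
    print(string.format("endpoint %d: device type 0x%04X, %d server clusters",
      ep, info.device_type, #info.server_clusters))
  end
end
```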

2 Likes

Relying on fingerprints only was the main issue with Zigbee and the reason custom drivers were needed for pretty much everything. Even if you had a device exactly the same as another one, if the fingerprint didn’t match it would not work. And that’s why Mariano got tired of people asking to add fingerprints :slight_smile:

With Matter they’re taking a more generic approach: there are fingerprints for WWST devices, but sometimes they’re overridden by the discovery mechanism, which is actually better since different firmware versions may have different features or report different events.

However, they’re constrained by the architecture of Edge drivers, profiles and capabilities. Profiles should be dynamic instead of manually pre-created, capabilities for Matter devices should match Matter features (so many functions are not exposed because the capability definitions lack commands and events), and a device should be able to use multiple drivers, each specialized in a capability/cluster.
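To illustrate the “dynamic profiles” part: the profile could be derived from whatever server clusters the device actually reports, instead of picking one of N pre-built ones. A plain-Lua sketch; the cluster IDs are real Matter IDs, but the mapping table and function are invented for illustration and are nothing like the real SDK:

```lua
-- Hypothetical sketch of deriving a profile from reported clusters rather than
-- choosing from a fixed set of hand-written profiles.
local cluster_to_capability = {
  [0x0006] = "switch",                      -- On/Off
  [0x0008] = "switchLevel",                 -- Level Control
  [0x0300] = "colorControl",                -- Color Control
  [0x0201] = "thermostat",                  -- Thermostat
  [0x0402] = "temperatureMeasurement",      -- Temperature Measurement
  [0x0405] = "relativeHumidityMeasurement", -- Relative Humidity Measurement
}

-- Build a per-endpoint capability list from the server clusters the device
-- itself lists in its Descriptor cluster.
local function build_profile(endpoints)
  local profile = {}
  for ep, server_clusters in pairs(endpoints) do
    profile[ep] = {}
    for _, cluster_id in ipairs(server_clusters) do
      local cap = cluster_to_capability[cluster_id]
      if cap then table.insert(profile[ep], cap) end
    end
  end
  return profile
end

-- Example: a TRV that reports Thermostat + Temperature Measurement on endpoint 1
local profile = build_profile({ [1] = { 0x0201, 0x0402 } })
for ep, caps in pairs(profile) do
  print("endpoint " .. ep .. ": " .. table.concat(caps, ", "))
end
```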

Discovery is still needed to account for bridged devices, and for non-certified devices too, like the Matter bridge for Home Assistant.

6 Likes

But only invoked when there’s no matching fingerprint.

Imagine auto-generated fingerprints from the CSA database, updated daily, that include information about EPs and client/server clusters for a start. Ever seen the logs when you add a simple Matter TRV? “Nope, no humidity sensor, let’s try and see if the device has pm2.5”, and so on.

When I see the logs as it tries to figure out what the device is capable of, I want to scream: “just look it up in the official database! It has been probed already!”.
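What I’d hope the order of operations could look like, as a rough plain-Lua sketch. Everything here is a stub or stand-in; nothing is a real SDK call, and the profile names are made up:

```lua
-- Hypothetical order: exact fingerprint first, then a CSA-derived record,
-- and only then live probing as a last resort.
local fingerprints = {}            -- hand-curated WWST entries, keyed "vid:pid"
local csa_records  = {             -- auto-generated from the CSA database, keyed "vid:pid"
  ["0xFFF1:0x8001"] = { profile = "thermostat-humidity" },
}

local function key(vid, pid) return string.format("0x%04X:0x%04X", vid, pid) end

local function resolve_profile(vid, pid, probe)
  local k = key(vid, pid)
  if fingerprints[k] then return fingerprints[k].profile, "fingerprint" end
  if csa_records[k]  then return csa_records[k].profile,  "csa-database" end
  return probe(), "probed"         -- only reached for unknown or bridged devices
end

-- Example: the TRV is not in the curated list but is in the CSA-derived one,
-- so no cluster-by-cluster guessing is needed.
local profile, source = resolve_profile(0xFFF1, 0x8001, function() return "generic" end)
print(profile, source)             --> thermostat-humidity   csa-database
```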

What is the CSA database / compliance document (in XML) good for? For a human to read, to look up what exceptions need to be hardcoded for an Aqara device in the src/init.lua of a Matter driver?


It’s a bit like running nmap (a port scan) on a server to figure out whether it’s a mail server or a web server and which port is which.

1 Like

This is exactly what I mean by designing for a device rather than the specification.

If the SmartThings architecture engineers worked on keeping up with the specification, and onboarding were based on that perspective, we’d never run into issues like the SmartThings max for something being smaller than the specification max.

6 Likes

We should rewrite the whole Edge Drivers stack from scratch and submit the mother of all pull requests, called “Edge Drivers 2.0”, by the end of the year.

Description

No discussion. Take it or leave it.

5 Likes

All that information is already provided by the device in the Descriptor cluster and the FeatureMap. And the device lists what it supports; it’s not that you have to ask “do you support this? and this? and this?”
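To make it concrete: the Descriptor cluster (0x001D) lists the device types and server clusters per endpoint, and each cluster’s FeatureMap (global attribute 0xFFFC) is just a bitmask you can decode. A plain-Lua sketch; the data table is hand-written here to stand in for attribute reads from a real device:

```lua
-- Sketch: what a device already reports via the Descriptor cluster and the
-- per-cluster FeatureMap. The table stands in for real attribute reads.
local endpoint_1 = {
  device_types    = { 0x0101 },           -- Dimmable Light
  server_clusters = { 0x0006, 0x0008 },   -- On/Off, Level Control
  feature_maps    = { [0x0008] = 0x03 },  -- Level Control FeatureMap value
}

-- Level Control feature bits from the Matter spec: 0 = OnOff, 1 = Lighting, 2 = Frequency
local LEVEL_FEATURES = { [0] = "OnOff", [1] = "Lighting", [2] = "Frequency" }

local function decode_features(feature_map, names)
  local out = {}
  for bit = 0, 31 do
    if names[bit] and (feature_map >> bit) & 1 == 1 then
      table.insert(out, names[bit])
    end
  end
  return out
end

print(table.concat(decode_features(endpoint_1.feature_maps[0x0008], LEVEL_FEATURES), ", "))
--> OnOff, Lighting  -- nothing to guess, just read and decode
```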

I get that a common model is needed to hide technical stuff from the user, so a Matter light, a Zigbee light or a cloud light have the same user interface and a common feature set. Just like Home Assistant has entities, SmartThings has capabilities. Perfect.

Some capabilities have evolved, like the colour temperature for lights, which is now able to dynamically change its range to match the device. Well done! Or buttons can display only the actions that the button supports, like pressed, held, etc.

But, at the same time, it feels like some capabilities are stuck in the past. You can set an absolute brightness but not a relative one; you can set a transition time when changing brightness but not when changing colour temperature; you can have an action for holding a button but there’s no release action; and some actions have no user interface at all and need the Rules API, even when they are as simple as increasing the volume of a Sonos speaker.
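The frustrating part is that the Matter side already has the commands for most of this: Level Control has a Step command, and Color Control’s MoveToColorTemperature takes a transition time, so the gap is in the capability definitions. A rough plain-Lua sketch of the fields those cluster commands take; the two helper functions are invented for illustration, not anything from the SDK:

```lua
-- Level Control "Step" command (0x02): step up/down by an amount over a time.
local function step_brightness(percent_delta, transition_seconds)
  return {
    cluster    = 0x0008,                        -- Level Control
    command    = 0x02,                          -- Step
    step_mode  = percent_delta >= 0 and 0 or 1, -- 0 = up, 1 = down
    step_size  = math.floor(math.abs(percent_delta) * 254 / 100 + 0.5),
    transition = math.floor(transition_seconds * 10), -- tenths of a second
  }
end

-- Color Control "MoveToColorTemperature" command (0x0A) also takes a transition time.
local function set_color_temp(kelvin, transition_seconds)
  return {
    cluster    = 0x0300,                        -- Color Control
    command    = 0x0A,                          -- MoveToColorTemperature
    mireds     = math.floor(1000000 / kelvin + 0.5),
    transition = math.floor(transition_seconds * 10),
  }
end

-- e.g. "10% brighter over 2 s" and "2700 K over 5 s"
local a = step_brightness(10, 2)   -- step_size 25, transition 20
local b = set_color_temp(2700, 5)  -- ~370 mireds, transition 50
print(a.step_size, a.transition, b.mireds, b.transition)
```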

Instead of including new capabilities that don’t add anything and that no device uses, like activityLightingMode, which has breathing exercises (really), maybe revamp the basic capabilities a little.

4 Likes

I’ve heard that the author of the first driver that uses activityLightingMode (aka wimHofMode) wins a trip to the ST headquarters.

sleepPreparation - Get a breathing guide to help you relax and fall asleep. Breathe in for 4 seconds as the light gets brighter, hold your breath for 7 seconds when the light is steady, and breathe out for 8 seconds as the light gets darker. The light will turn off after 4 breaths.

2 Likes

I actually included it in a light just for fun a few months ago to see how it looks; it’s just a 2x4 or 3x3, or something like that, table of buttons. I was surprised it had a user interface explaining the breathing exercise.

However, it overlaps with modes that already exist, and it has a problem: once you pick something, I was not able to unpick it. If an actual driver used it, it would display the last “scene” even if it’s no longer active. And, of course, the sleep preparation thing needs to be implemented by the driver, and that’s a lot of timers and transitions.
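For what it’s worth, the 4-7-8 part itself is basically a chain of delayed level changes. A toy plain-Lua sketch, where both callbacks are invented stand-ins for whatever the driver would really use:

```lua
-- Sketch of the 4-7-8 breathing pattern as a chain of delayed level changes.
-- `set_level(percent, transition_s)` and `call_later(delay_s, fn)` are invented
-- stand-ins, not SDK calls.
local function run_478(set_level, call_later, breaths_left)
  breaths_left = breaths_left or 4
  if breaths_left == 0 then
    set_level(0, 1)                       -- turn off after 4 breaths
    return
  end
  set_level(100, 4)                       -- inhale: brighten over 4 s
  call_later(4, function()
    -- hold: steady for 7 s
    call_later(7, function()
      set_level(1, 8)                     -- exhale: dim over 8 s
      call_later(8, function()
        run_478(set_level, call_later, breaths_left - 1)
      end)
    end)
  end)
end

-- Dry run with print/no-delay stand-ins just to see the sequence:
run_478(
  function(level, t) print(("level %d%% over %ds"):format(level, t)) end,
  function(_, fn) fn() end
)
```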

1 Like

Why would you think there aren’t any devices using activityLightingMode? The capability has been around longer than Edge drivers have. It is easy to imagine it corresponding with settings on poncy lighting products. It must have had some use to have made it to ‘live’ status.