New driver errors with 44.00009

@nayelyz @robert.masen

My hub updated a few days ago. Since then I've started seeing errors in my driver logs that I have not seen before:

From one driver:

2022-09-13T15:55:09.769743318+00:00 FATAL UPnP Explorer  RPC Status: status: Internal, message: "error in scripting-engine broker", details: [], metadata: MetadataMap { headers: {"content-type": "application/grpc", "date": "Tue, 13 Sep 2022 15:55:09 GMT"} }
Traceback:
stack traceback:
	[C]: in function 'coroutine.yield'
	[string "util.call_rpc_method"]:18: in upvalue 'call_rpc_method'
	[string "socket"]:1603: in field 'select'
	[string "cosock.lua"]:359: in field 'run'
	[string "st/driver.lua"]:762: in method 'run'
	[string "init.lua"]:774: in main chunk

From another driver:

2022-09-16T23:57:41.044015508+00:00 FATAL Edge Weather V1  RPC Status: status: Internal, message: "error in scripting-engine broker", details: [], metadata: MetadataMap { headers: {"content-type": "application/grpc", "date": "Fri, 16 Sep 2022 23:57:40 GMT"} }
Traceback:
stack traceback:
	[C]: in function 'coroutine.yield'
	[string "coroutine"]:9: in function 'wrapped_coroutine_resume'
	[string "cosock.lua"]:265: in field 'run'
	[string "st/driver.lua"]:762: in method 'run'
	[string "init.lua"]:412: in main chunk
2022-09-16T23:57:46.465172633+00:00 WARN Edge Weather V1  Unexpected filesystem lookup for capability partyvoice23922.windbearing

Both FATAL errors caused their respective drivers to restart.

The first driver kept hitting the same error after each restart and was stuck in an infinite loop, so I had to delete all its devices and uninstall it. This is a driver that had been running for nearly a year without a problem.

The second driver was able to recover and proceed normally after the first restart. However, also note the additional warning that occurred:

WARN Edge Weather V1 Unexpected filesystem lookup for capability partyvoice23922.windbearing

I have seen that warning a couple of times now, just as a driver was shutting down and about to be restarted.

EDIT: I just submitted hub logs; they should capture the second driver, but the first time this happened with the first driver was a couple of days ago.


I had a similar issue that was causing an auto-restart. I was able to resolve it by installing a version of the driver that didn't do anything, instead of uninstalling everything. It's a giant pain when you get caught in that loop. I was then able to reinstall a modified driver and debug it from there.
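
For anyone else caught in that loop, a do-nothing driver can be as small as the sketch below. This is only an illustration; the name and handlers are placeholders, not my actual driver.

local Driver = require "st.driver"

-- Minimal placeholder driver: every handler intentionally does nothing,
-- so no user code can raise an error and trigger another restart.
local noop_driver = Driver("noop-placeholder", {
  lifecycle_handlers = {
    init = function(_driver, _device) end,
    added = function(_driver, _device) end,
    removed = function(_driver, _device) end,
  },
})

noop_driver:run()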

As for these specific errors, I have not seen them in any of my LAN drivers yet. I haven't seen that hub firmware yet, though. Do you know which calls you are making that trigger these?

It’s not clear at what point in my drivers’ code these are happening. They are both LAN-based drivers: the looping one has multiple threads and servers running, but the other one just makes simple HTTP requests on a periodic basis.
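
For context, a periodic HTTP poll in an Edge driver typically looks something like the sketch below; the URL, interval, and names here are placeholders, not the driver's actual ones.

local cosock = require "cosock"
local http = cosock.asyncify "socket.http"
local log = require "log"

-- Poll a LAN endpoint and log the result (placeholder URL).
local function poll_status()
  local body, code = http.request("http://example.local/api/status")
  if body and code == 200 then
    log.debug("poll succeeded")
  else
    log.warn("poll failed, code=" .. tostring(code))
  end
end

-- Registered during driver setup, e.g.:
-- driver:call_on_schedule(300, poll_status, "periodic-status-poll")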

It just so happened that the first driver kicks off a voice announcement each time it starts, which is how I even knew it was in a death loop.

Seeing this as well on mine. Can’t tell what’s happening.


It seems like ST raised the severity level of errors. Before, there appeared to be a catch around the user’s code and the driver continued to work after errors; now it crashes with a strange stack trace.
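
As a user-side guard (not a claim about what the platform's internal catch did or does), you can wrap your own handlers in pcall so a failure gets logged instead of killing the driver. A rough sketch, with illustrative names:

local log = require "log"

-- Illustrative wrapper: run a handler inside pcall and log any error
-- instead of letting it propagate and crash the driver.
local function protected(handler)
  return function(driver, device, ...)
    local ok, err = pcall(handler, driver, device, ...)
    if not ok then
      log.error("handler error: " .. tostring(err))
    end
  end
end

-- usage (illustrative): capability_handlers = { [caps.switch.ID] = { on = protected(on_handler) } }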

I verified my hub version and am not seeing these errors on my LAN drivers. The “restart loop” I described happens on all versions with any fatal error that occurs during initialization. I haven’t seen the errors that are the root cause of your particular flavor.

Hi all,
Thanks for the bug reports. It appears we have two issues. The first is that we are reporting the software version incorrectly: are you seeing version 44.00009 in the SmartThings app and/or graph?

The other issue is that we seem to have introduced a bug into the socket receive path that causes a receive on a closed socket to be treated as a more severe error than previously. We already have a fix in place, so the next dot release should resolve this issue. Sorry for the inconvenience!
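
Until that update lands, one defensive pattern on the user side is to handle the error string from receive explicitly, assuming the usual cosock/luasocket semantics where receive returns nil plus "closed" or "timeout". This may or may not sidestep the platform-level error described above; it is just a sketch of the general technique.

local log = require "log"

-- Sketch: treat "closed" and "timeout" as expected outcomes rather than
-- letting the caller keep reading from a dead socket.
local function safe_receive(sock)
  local data, err = sock:receive()
  if data then return data end
  if err == "closed" then
    sock:close()
    return nil, "closed"   -- caller should reconnect instead of retrying this socket
  elseif err ~= "timeout" then
    log.warn("unexpected socket error: " .. tostring(err))
  end
  return nil, err
end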


Both, and also via the API.

Thanks for getting back to us!

@TAustin @robert.masen

Is there a workaround for this? A way to catch and suppress the error?

I am hitting the same error/issue now in a driver I am working on.

2022-09-23T07:55:50.741967623+00:00 PRINT Wiser Bridge  1       200     HTTP/1.1 200 OK
2022-09-23T07:55:50.757090623+00:00 FATAL Wiser Bridge  RPC Status: status: Internal, message: "error in scripting-engine broker", details: [], metadata: MetadataMap { headers: {"content-type": "application/grpc", "date": "Fri, 23 Sep 2022 07:55:50 GMT"} }
Traceback:
stack traceback:
        [C]: in function 'coroutine.yield'
        [string "coroutine"]:9: in function 'wrapped_coroutine_resume'
        [string "cosock.lua"]:265: in field 'run'
        [string "st/driver.lua"]:739: in method 'run'
        [string "init.lua"]:72: in main chunk

Update: If it helps, I have tracked down the line of code that causes this in my driver:

driver:try_create_device(metadata)

Update 2: I solved it. It seems that the error happens whenever there is some kind of failure when calling the above line of code; my mistake was that one of the properties of the metadata was an integer when it should have been a string.
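
For reference, a metadata table where every field is a string looks like the sketch below; the field values here are made up, not my actual ones.

-- All values are strings; passing an integer here (e.g. a numeric device id)
-- is the kind of mistake that triggered the error above.
local metadata = {
  type = "LAN",
  device_network_id = "wiser-bridge-001",
  label = "Wiser Bridge",
  profile = "wiser-bridge",
  manufacturer = "Wiser",
  model = "Bridge",
  vendor_provided_label = "Wiser Bridge",
}

driver:try_create_device(metadata)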


Nice resolution. Thanks for following up with that. I was wondering why I wasn’t able to reproduce it.