Regular "hanging" of the RaspberryMatic

Setup, usage and help for RaspberryMatic (OCCU on Raspberry Pi)

Moderators: jmaus, co-administrators

automat
Posts: 4
Joined: 01.04.2015, 20:08

Regular "hanging" of the RaspberryMatic

Post by automat » 02.01.2020, 00:14

Good evening everyone,

First of all, a "Happy New Year" to everyone! Until now I have mostly been reading along; today I would like to kindly ask for your support.

My RaspberryMatic is currently a source of real concern. For several weeks now, the system has been hanging very frequently.

The symptoms are:
- no programs are executed anymore
- devices can no longer be switched on or off via the TinyMatic app or the web frontend
- the WebUI is no longer accessible
- the login page appears and the top frame loads, but beyond that only a loading spinner
- occasionally the watchdog still triggers and reports "rfd restarted"
- access via PuTTY still works, and the CUXD URLs can still be opened in the browser and the respective logs displayed

Recovery:
- 90% of cases: only by unplugging the Pi from power and plugging it back in
- 10% of cases: self-recovery after some time (sometimes after 1 h, sometimes after several hours)

Hardware:
- Raspberry Pi 3 B+
- RPI-RF-MOD radio module board
- ELV plug-in power supply, 2.5 A
- antenna mod with an external whip antenna
- also installed in the house: Fritzbox, network switches, IP cam, Synology NAS

Firmware:
The hangs have occurred with the following firmware versions:
- 3.47.18.20190918
- 3.47.22.20191130
- 3.49.17.20191225

Homematic equipment:
- various programs
- approx. 40 Homematic radio components
- add-on software: XML API (1.2.0), CUXD (2.3.4), CUXCHART (Highcharts) (1.4.5)
- no LAN adapters
- no repeaters

RaspberryMatic firewall settings:
- ports blocked
- Homematic XML-RPC API restricted
- remote Homematic script API restricted
- Mediola: no access
- port forwarding: individual ports set
- IP addresses: 192.168.0.0/120

What I have already tried:
- swapped the SD card for a different one
- complete re-installation of the RaspberryMatic image plus restoring a backup
- up-/downgrade to newer/older firmware versions

When does the system hang:
- no discernible pattern
- at various times of day and night
- sometimes even after days of absence with no interaction with the system
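
All of the snapshots below were taken over the still-working SSH session while the box was hung. For context, this is roughly how I pulled them (plain BusyBox shell on the RaspberryMatic; the dmesg/tail calls are simply my way of extracting the kernel and syslog excerpts):

Code: Select all

# state snapshot over SSH while the WebUI is unresponsive
uptime                            # load average (unremarkable in my case)
free                              # memory usage
ps                                # process list
cat /proc/meminfo                 # detailed memory statistics
dmesg | tail -n 50                # recent kernel messages (eq3loop lines)
tail -n 300 /var/log/messages     # monit / rfd / ReGaHss restart history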

Logs requested in a separate thread:

Code: Select all

# uptime
 00:01:07 up  5:59,  1 users,  load average: 0.00, 0.00, 0.00
# free
              total        used        free      shared  buff/cache   available
Mem:         991988      146576      749308        2156       96104      836688
Swap:             0           0           0
# ps
PID   USER     TIME  COMMAND
    1 root      0:00 init
    2 root      0:00 [kthreadd]
    3 root      0:00 [rcu_gp]
    4 root      0:00 [rcu_par_gp]
    7 root      0:00 [kworker/u8:0-br]
    8 root      0:00 [mm_percpu_wq]
    9 root      0:00 [ksoftirqd/0]
   10 root      0:10 [rcu_preempt]
   11 root      0:00 [rcu_sched]
   12 root      0:00 [rcu_bh]
   13 root      0:00 [migration/0]
   14 root      0:00 [cpuhp/0]
   15 root      0:00 [cpuhp/1]
   16 root      0:00 [migration/1]
   17 root      0:00 [ksoftirqd/1]
   20 root      0:00 [cpuhp/2]
   21 root      0:00 [migration/2]
   22 root      0:00 [ksoftirqd/2]
   25 root      0:00 [cpuhp/3]
   26 root      0:00 [migration/3]
   27 root      0:00 [ksoftirqd/3]
   29 root      0:00 [kworker/3:0H-kb]
   30 root      0:00 [kdevtmpfs]
   31 root      0:00 [netns]
   32 root      0:00 [rcu_tasks_kthre]
   36 root      0:00 [khungtaskd]
   37 root      0:00 [oom_reaper]
   38 root      0:00 [writeback]
   39 root      0:00 [kcompactd0]
   40 root      0:00 [ksmd]
   41 root      0:00 [crypto]
   42 root      0:00 [kblockd]
   43 root      0:00 [watchdogd]
   44 root      0:00 [rpciod]
   45 root      0:00 [kworker/u9:0]
   46 root      0:00 [xprtiod]
   49 root      0:00 [kswapd0]
   50 root      0:00 [nfsiod]
   62 root      0:00 [kthrotld]
   63 root      0:00 [iscsi_eh]
   64 root      0:00 [dwc_otg]
   66 root      0:00 [DWC Notificatio]
   67 root      0:00 [vchiq-slot/0]
   68 root      0:00 [vchiq-recy/0]
   69 root      0:00 [vchiq-sync/0]
   72 root      0:00 [irq/86-mmc1]
   74 root      0:00 [mmc_complete]
   75 root      0:00 [kworker/0:1H-mm]
   77 root      0:00 [irq/166-usb-001]
   78 root      0:00 [kworker/2:1H-kb]
   79 root      0:00 [jbd2/mmcblk0p2-]
   80 root      0:00 [ext4-rsv-conver]
   84 root      0:00 [kworker/1:1H-kb]
   92 root      0:00 [jbd2/mmcblk0p3-]
   93 root      0:00 [ext4-rsv-conver]
   97 root      0:00 /usr/bin/psplash -n
   99 root      0:00 [kworker/3:2H-kb]
  104 root      0:00 /sbin/watchdog -T 300 -t 5 /dev/watchdog
  115 root      0:00 [vchiq-keep/0]
  227 root      0:24 /bin/hss_led -l 6
  250 root      0:29 /sbin/udevd -d
  272 root      0:00 [spi0]
  286 root      0:00 [cfg80211]
  288 root      0:00 [brcmf_wq/mmc1:0]
  290 root      0:00 [brcmf_wdog/mmc1]
  345 root      0:02 /usr/sbin/irqbalance
  353 root      0:01 /usr/sbin/rngd
  362 dbus      0:00 dbus-daemon --system
  404 root      0:00 /sbin/syslogd -n -m 0 -s 4096 -b 1 -D
  407 root      0:00 /sbin/klogd -n
  432 root      0:00 [ipv6_addrconf]
  435 root      0:00 /sbin/udhcpc -b -t 100 -T 3 -S -x hostname:homees15 -i eth
  554 root      0:02 /usr/sbin/ifplugd -i wlan0 -MwI -u5 -d5
  558 root      0:21 /usr/sbin/ifplugd -i eth0 -fI -u0 -d10
  569 root      0:00 /usr/sbin/chronyd
  599 root      0:00 /bin/eq3configd
  606 root      0:00 /usr/sbin/lighttpd-angel -f /etc/lighttpd/lighttpd.conf -D
  607 root      0:29 /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf -D
  616 root      0:01 /bin/ssdpd
  622 root      0:00 /usr/sbin/sshd
  717 root      0:04 /bin/multimacd -f /var/etc/multimacd.conf -l 5
  788 root      1:33 java -Xmx128m -Dos.arch=armv7l -Dlog4j.configuration=file:
  978 root      0:00 /usr/sbin/crond -f -l 9
  992 root      1:01 /usr/bin/monit -Ic /etc/monitrc
  993 root      0:00 init
 3324 root      0:00 [kworker/2:3-mm_]
 5084 root      0:00 sshd: root@notty
 5086 root      0:00 /usr/libexec/sftp-server
 7660 root      0:00 [kworker/u8:1-ev]
 8596 root      0:00 [kworker/3:0-mm_]
 8625 root      0:00 [kworker/1:0H]
 9722 root      0:00 /usr/local/addons/cuxd/cuxd
 9733 root      0:00 [kworker/1:1-eve]
 9881 root      0:00 [kworker/0:2-eve]
 9951 root      0:00 [kworker/2:2H]
 9962 root      0:00 [kworker/0:2H]
 9963 root      0:00 [kworker/0:3-eve]
 9968 root      0:03 /bin/ReGaHss -f /etc/rega.conf -l 2
10225 root      0:00 [kworker/3:1-eve]
10241 root      0:02 /bin/rfd -f /var/etc/rfd.conf -l 5
10387 root      0:00 [kworker/2:1-eve]
10427 root      0:00 [kworker/1:0-eve]
10642 root      0:00 [kworker/0:0-mm_]
10998 root      0:00 [kworker/0:1-eve]
11148 root      0:00 sshd: root@pts/0
11155 root      0:00 -sh
11165 root      0:00 [test]
11166 root      0:00 [test]
11167 root      0:00 [test]
11168 root      0:00 [test]
11173 root      0:00 [sh]
11178 root      0:00 [test]
11179 root      0:00 [test]
11180 root      0:00 [test]
11181 root      0:00 [grep]
11184 root      0:00 [sh]
11185 root      0:00 [grep]
11186 root      0:00 [test]
11189 root      0:00 [test]
11190 root      0:00 [sh]
11191 root      0:00 [test]
11203 root      0:00 ps
# cat /proc/meminfo
MemTotal:         991988 kB
MemFree:          749848 kB
MemAvailable:     837228 kB
Buffers:            7700 kB
Cached:            88412 kB
SwapCached:            0 kB
Active:           153716 kB
Inactive:          52576 kB
Active(anon):     112124 kB
Inactive(anon):      212 kB
Active(file):      41592 kB
Inactive(file):    52364 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:        110244 kB
Mapped:            24028 kB
Shmem:              2164 kB
Slab:              20844 kB
SReclaimable:       8632 kB
SUnreclaim:        12212 kB
KernelStack:        1736 kB
PageTables:         1104 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:      495992 kB
Committed_AS:     175244 kB
VmallocTotal:    1064960 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
Percpu:              416 kB
CmaTotal:           8192 kB
CmaFree:            6860 kB
Kernel log at the time of the hang:

Code: Select all

<6>[17704.203676] eq3loop: eq3loop_close_slave() mmd_bidcos
<6>[17704.258311] eq3loop: eq3loop_open_slave() mmd_bidcos
<6>[19523.122962] eq3loop: eq3loop_close_slave() mmd_bidcos
<6>[19523.180022] eq3loop: eq3loop_open_slave() mmd_bidcos
<6>[21140.615780] eq3loop: eq3loop_close_slave() mmd_bidcos
<6>[21140.670478] eq3loop: eq3loop_open_slave() mmd_bidcos
Syslog at the time of the hang:

Code: Select all

Jan  1 22:55:14 homees15 user.warn monit[992]: 'rfd' failed protocol test [DEFAULT] at [localhost]:32001 [TCP/IP] -- Connection timed out
Jan  1 22:55:37 homees15 user.warn monit[992]: 'rfd' failed protocol test [DEFAULT] at [localhost]:32001 [TCP/IP] -- Connection timed out
Jan  1 22:55:59 homees15 user.warn monit[992]: 'rfd' failed protocol test [DEFAULT] at [localhost]:32001 [TCP/IP] -- Connection timed out
Jan  1 22:56:22 homees15 user.warn monit[992]: 'rfd' failed protocol test [DEFAULT] at [localhost]:32001 [TCP/IP] -- Connection timed out
Jan  1 22:56:45 homees15 user.err monit[992]: 'rfd' failed protocol test [DEFAULT] at [localhost]:32001 [TCP/IP] -- Connection timed out
Jan  1 22:56:45 homees15 user.info monit[992]: 'rfd' trying to restart
Jan  1 22:56:45 homees15 user.info monit[992]: 'rfd' restart: '/etc/init.d/S61rfd restart'
Jan  1 22:56:45 homees15 user.info kernel: [17704.203676] eq3loop: eq3loop_close_slave() mmd_bidcos
Jan  1 22:56:45 homees15 user.info kernel: [17704.258311] eq3loop: eq3loop_open_slave() mmd_bidcos
Jan  1 22:57:08 homees15 user.err monit[992]: 'rfd' service restarted 1 times within 1 cycles(s) - exec
Jan  1 22:57:08 homees15 user.info monit[992]: 'rfd' exec: '/bin/triggerAlarm.tcl rfd restarted WatchDog-Alarm'
Jan  1 22:57:08 homees15 user.info monit[992]: 'rfd' process is running after previous restart timeout (manually recovered?)
Jan  1 22:57:25 homees15 user.info monit[992]: 'rfd' connection succeeded to [localhost]:32001 [TCP/IP]
Jan  1 23:12:57 homees15 auth.info sshd[5084]: Accepted password for root from 192.168.0.114 port 63462 ssh2
Jan  1 23:13:02 homees15 auth.info sshd[5089]: Accepted password for root from 192.168.0.114 port 63464 ssh2
Jan  1 23:14:59 homees15 auth.info sshd[5299]: Accepted password for root from 192.168.0.114 port 63528 ssh2
Jan  1 23:22:19 homees15 auth.info sshd[6083]: Accepted password for root from 192.168.0.114 port 63643 ssh2
Jan  1 23:23:40 homees15 auth.info sshd[6231]: Accepted password for root from 192.168.0.114 port 63655 ssh2
Jan  1 23:27:01 homees15 auth.info sshd[6623]: Accepted password for root from 192.168.0.114 port 63690 ssh2
Jan  1 23:27:04 homees15 user.err monit[992]: 'rfd' failed protocol test [DEFAULT] at [localhost]:32001 [TCP/IP] -- Connection timed out
Jan  1 23:27:04 homees15 user.info monit[992]: 'rfd' trying to restart
Jan  1 23:27:04 homees15 user.info monit[992]: 'rfd' restart: '/etc/init.d/S61rfd restart'
Jan  1 23:27:04 homees15 local0.err ReGaHss: ERROR: XmlRpc: Error in XmlRpcClient::writeRequest: write error (error 111). [error():iseXmlRpc.h:281]
Jan  1 23:27:04 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'setValue': rpcClient.execute() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ1782268:1","LEVEL",0.000000}, result: nil) [CallXmlrpcMethod():iseXmlRpc.cpp:2602]
Jan  1 23:27:04 homees15 local0.err ReGaHss: ERROR: XMLRPC 'setValue' call failed (interface: 1007, params: {"MEQ1782268:1","LEVEL",0.000000}) [CallSetValue():iseXmlRpc.cpp:1505]
Jan  1 23:27:04 homees15 local0.err ReGaHss: ERROR: rpc.CallSetValue failed; address = MEQ1782268:1 [WriteValue():iseDOMdpHSS.cpp:76]
Jan  1 23:27:04 homees15 user.info kernel: [19523.122962] eq3loop: eq3loop_close_slave() mmd_bidcos
Jan  1 23:27:04 homees15 local0.err ReGaHss: ERROR: Send failed (errno=32)! [Send():iseSysLx.cpp:1596]
Jan  1 23:27:04 homees15 local0.err ReGaHss: ERROR: XmlRpc: Error in XmlRpcClient::writeRequest: write error (error 111). [error():iseXmlRpc.h:281]
Jan  1 23:27:04 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'setValue': rpcClient.execute() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0485236:1","STATE",false}, result: nil) [CallXmlrpcMethod():iseXmlRpc.cpp:2602]
Jan  1 23:27:04 homees15 local0.err ReGaHss: ERROR: XMLRPC 'setValue' call failed (interface: 1007, params: {"MEQ0485236:1","STATE",false}) [CallSetValue():iseXmlRpc.cpp:1505]
Jan  1 23:27:04 homees15 local0.err ReGaHss: ERROR: rpc.CallSetValue failed; address = MEQ0485236:1 [WriteValue():iseDOMdpHSS.cpp:76]
Jan  1 23:27:04 homees15 local0.err ReGaHss: ERROR: Send failed (errno=32)! [Send():iseSysLx.cpp:1596]
Jan  1 23:27:04 homees15 user.info kernel: [19523.180022] eq3loop: eq3loop_open_slave() mmd_bidcos
Jan  1 23:27:26 homees15 user.err monit[992]: 'rfd' service restarted 1 times within 1 cycles(s) - exec
Jan  1 23:27:26 homees15 user.info monit[992]: 'rfd' exec: '/bin/triggerAlarm.tcl rfd restarted WatchDog-Alarm'
Jan  1 23:27:26 homees15 user.info monit[992]: 'rfd' process is running after previous restart timeout (manually recovered?)
Jan  1 23:27:44 homees15 user.info monit[992]: 'rfd' connection succeeded to [localhost]:32001 [TCP/IP]
Jan  1 23:33:29 homees15 user.warn monit[992]: 'ReGaHss' failed protocol test [HTTP] at [localhost]:8183 [TCP/IP] -- HTTP: Error receiving data -- Resource temporarily unavailable
Jan  1 23:33:52 homees15 user.warn monit[992]: 'ReGaHss' failed protocol test [HTTP] at [localhost]:8183 [TCP/IP] -- HTTP: Error receiving data -- Resource temporarily unavailable
Jan  1 23:34:15 homees15 user.warn monit[992]: 'ReGaHss' failed protocol test [HTTP] at [localhost]:8183 [TCP/IP] -- HTTP: Error receiving data -- Resource temporarily unavailable
Jan  1 23:34:38 homees15 user.warn monit[992]: 'ReGaHss' failed protocol test [HTTP] at [localhost]:8183 [TCP/IP] -- HTTP: Error receiving data -- Resource temporarily unavailable
Jan  1 23:35:01 homees15 user.err monit[992]: 'ReGaHss' failed protocol test [HTTP] at [localhost]:8183 [TCP/IP] -- HTTP: Error receiving data -- Resource temporarily unavailable
Jan  1 23:35:01 homees15 user.info monit[992]: 'ReGaHss' trying to restart
Jan  1 23:35:01 homees15 user.info monit[992]: 'ReGaHss' restart: '/etc/init.d/S70ReGaHss restart'
Jan  1 23:35:04 homees15 user.info ReGaHss: SIGTERM: ReGaHss Halting
Jan  1 23:35:29 homees15 user.err monit[992]: 'ReGaHss' service restarted 1 times within 1 cycles(s) - exec
Jan  1 23:35:29 homees15 user.info monit[992]: 'ReGaHss' exec: '/bin/triggerAlarm.tcl ReGaHss restarted WatchDog-Alarm'
Jan  1 23:35:29 homees15 user.info monit[992]: 'ReGaHss' process is running after previous restart timeout (manually recovered?)
Jan  1 23:35:47 homees15 user.info monit[992]: 'ReGaHss' connection succeeded to [localhost]:8183 [TCP/IP]
Jan  1 23:37:44 homees15 auth.info sshd[8048]: Accepted password for root from 192.168.0.114 port 63842 ssh2
Jan  1 23:41:16 homees15 user.err monit[992]: 'ReGaHss' failed protocol test [HTTP] at [localhost]:8183 [TCP/IP] -- HTTP: Error receiving data -- Resource temporarily unavailable
Jan  1 23:41:16 homees15 user.info monit[992]: 'ReGaHss' trying to restart
Jan  1 23:41:16 homees15 user.info monit[992]: 'ReGaHss' restart: '/etc/init.d/S70ReGaHss restart'
Jan  1 23:41:18 homees15 user.info ReGaHss: SIGTERM: ReGaHss Halting
Jan  1 23:41:43 homees15 user.err monit[992]: 'ReGaHss' service restarted 1 times within 1 cycles(s) - exec
Jan  1 23:41:43 homees15 user.info monit[992]: 'ReGaHss' exec: '/bin/triggerAlarm.tcl ReGaHss restarted WatchDog-Alarm'
Jan  1 23:41:43 homees15 user.info monit[992]: 'ReGaHss' process is running after previous restart timeout (manually recovered?)
Jan  1 23:42:01 homees15 user.info monit[992]: 'ReGaHss' connection succeeded to [localhost]:8183 [TCP/IP]
Jan  1 23:43:12 homees15 auth.info sshd[8810]: Accepted password for root from 192.168.0.114 port 63933 ssh2
Jan  1 23:44:13 homees15 user.err monit[992]: 'ReGaHss' failed protocol test [HTTP] at [localhost]:8183 [TCP/IP] -- HTTP: Error receiving data -- Resource temporarily unavailable
Jan  1 23:44:13 homees15 user.info monit[992]: 'ReGaHss' trying to restart
Jan  1 23:44:13 homees15 user.info monit[992]: 'ReGaHss' restart: '/etc/init.d/S70ReGaHss restart'
Jan  1 23:44:16 homees15 user.info ReGaHss: SIGTERM: ReGaHss Halting
Jan  1 23:44:41 homees15 user.err monit[992]: 'ReGaHss' service restarted 1 times within 1 cycles(s) - exec
Jan  1 23:44:41 homees15 user.info monit[992]: 'ReGaHss' exec: '/bin/triggerAlarm.tcl ReGaHss restarted WatchDog-Alarm'
Jan  1 23:44:41 homees15 user.info monit[992]: 'ReGaHss' process is running after previous restart timeout (manually recovered?)
Jan  1 23:44:59 homees15 user.info monit[992]: 'ReGaHss' connection succeeded to [localhost]:8183 [TCP/IP]
Jan  1 23:50:08 homees15 daemon.info cuxd[669]: CUx-Daemon restart(1)
Jan  1 23:50:08 homees15 daemon.info cuxd[669]: remove(/var/cache/cuxd_proxy.ini)
Jan  1 23:50:08 homees15 daemon.info cuxd[669]: save paramsets(/usr/local/addons/cuxd/cuxd.ps) size:1235
Jan  1 23:50:08 homees15 daemon.info cuxd[9722]: write_pid /var/run/cuxd.pid [9722]
Jan  1 23:50:08 homees15 daemon.info cuxd[9722]: CUx-Daemon(2.3.4) on CCU(3.47.22.20191130) start PID:9722
Jan  1 23:50:08 homees15 daemon.info cuxd[9722]: load paramsets(/usr/local/addons/cuxd/cuxd.ps) size:1235 update(0s):Wed Jan  1 23:50:08 2020
Jan  1 23:50:08 homees15 daemon.info cuxd[9722]: 2 device-paramset(s) loaded ok!
Jan  1 23:50:08 homees15 daemon.info cuxd[9722]: write_proxy /var/cache/cuxd_proxy.ini (9722 /usr/local/addons/cuxd/ 2.3.4 3.47.22.20191130 0)
Jan  1 23:51:58 homees15 user.err monit[992]: 'ReGaHss' failed protocol test [HTTP] at [localhost]:8183 [TCP/IP] -- HTTP: Error receiving data -- Resource temporarily unavailable
Jan  1 23:51:58 homees15 user.info monit[992]: 'ReGaHss' trying to restart
Jan  1 23:51:58 homees15 user.info monit[992]: 'ReGaHss' restart: '/etc/init.d/S70ReGaHss restart'
Jan  1 23:52:00 homees15 user.info ReGaHss: SIGTERM: ReGaHss Halting
Jan  1 23:52:25 homees15 user.err monit[992]: 'ReGaHss' service restarted 1 times within 1 cycles(s) - exec
Jan  1 23:52:25 homees15 user.info monit[992]: 'ReGaHss' exec: '/bin/triggerAlarm.tcl ReGaHss restarted WatchDog-Alarm'
Jan  1 23:52:25 homees15 user.info monit[992]: 'ReGaHss' process is running after previous restart timeout (manually recovered?)
Jan  1 23:52:46 homees15 local0.err ReGaHss: ERROR: XmlRpc: Error in XmlRpcClient::writeRequest: write error (error 110). [error():iseXmlRpc.h:281]
Jan  1 23:52:46 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.execute() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0485236:1","STATE"}, result: nil) [CallXmlrpcMethod():iseXmlRpc.cpp:2602]
Jan  1 23:52:46 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0485236:1","STATE"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:52:46 homees15 local0.err ReGaHss: ERROR: XmlRpc: Error in XmlRpcClient::writeRequest: write error (error 110). [error():iseXmlRpc.h:281]
Jan  1 23:52:46 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:52:46 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.execute() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"LEQ0980766:1","STATE"}, result: nil) [CallXmlrpcMethod():iseXmlRpc.cpp:2602]
Jan  1 23:52:46 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"LEQ0980766:1","STATE"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:52:46 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:52:46 homees15 local0.err ReGaHss: ERROR: XmlRpc: Error in XmlRpcClient::writeRequest: write error (error 110). [error():iseXmlRpc.h:281]
Jan  1 23:52:46 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'init': rpcClient.execute() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"xmlrpc_bin://127.0.0.1:31999","1007"}, result: nil) [CallXmlrpcMethod():iseXmlRpc.cpp:2602]
Jan  1 23:52:46 homees15 local0.err ReGaHss: ERROR: XMLRPC 'init' call failed (interface: 1007, params: {"xmlrpc_bin://127.0.0.1:31999","1007"}) [CallInit():iseXmlRpc.cpp:1204]
Jan  1 23:52:46 homees15 local0.err ReGaHss: ERROR: failed CallInit() for interface=BidCos-RF [ThreadFunction():iseRTHss.cpp:163]
Jan  1 23:52:48 homees15 user.info monit[992]: 'ReGaHss' connection succeeded to [localhost]:8183 [TCP/IP]
Jan  1 23:52:56 homees15 daemon.info cuxd[9722]: INIT 'xmlrpc_bin://127.0.0.1:31999' '1618'
Jan  1 23:53:18 homees15 local0.err ReGaHss: ERROR: XmlRpc: Error in XmlRpcClient::writeRequest: write error (error 110). [error():iseXmlRpc.h:281]
Jan  1 23:53:18 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.execute() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"LEQ0980766:1","LOWBAT"}, result: nil) [CallXmlrpcMethod():iseXmlRpc.cpp:2602]
Jan  1 23:53:18 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.execute() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0670190:3","MOTION"}, result: nil) [CallXmlrpcMethod():iseXmlRpc.cpp:2602]
Jan  1 23:53:18 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"LEQ0980766:1","LOWBAT"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:53:18 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0670190:3","MOTION"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:53:18 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:53:49 homees15 local0.err ReGaHss: ERROR: XmlRpc: Error in XmlRpcClient::writeRequest: write error (error 110). [error():iseXmlRpc.h:281]
Jan  1 23:53:49 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.execute() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"LEQ0980766:2","STATE"}, result: nil) [CallXmlrpcMethod():iseXmlRpc.cpp:2602]
Jan  1 23:53:49 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.execute() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"LEQ0810833:1","TEMPERATURE"}, result: nil) [CallXmlrpcMethod():iseXmlRpc.cpp:2602]
Jan  1 23:53:49 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"LEQ0980766:2","STATE"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:53:49 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"LEQ0810833:1","TEMPERATURE"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:53:49 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:53:49 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0.000000 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:54:02 homees15 user.err monit[992]: 'rfd' failed protocol test [DEFAULT] at [localhost]:32001 [TCP/IP] -- Connection timed out
Jan  1 23:54:02 homees15 user.info monit[992]: 'rfd' trying to restart
Jan  1 23:54:02 homees15 user.info monit[992]: 'rfd' restart: '/etc/init.d/S61rfd restart'
Jan  1 23:54:02 homees15 user.info kernel: [21140.615780] eq3loop: eq3loop_close_slave() mmd_bidcos
Jan  1 23:54:02 homees15 daemon.warn cuxd[9722]: rpc_handler() - len(-1) read error!
Jan  1 23:54:02 homees15 user.info kernel: [21140.670478] eq3loop: eq3loop_open_slave() mmd_bidcos
Jan  1 23:54:04 homees15 user.err rfd: HSSParameter::GetValue() id=BRIGHTNESS failed getting physical value.
Jan  1 23:54:04 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0248436:1","BRIGHTNESS"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan  1 23:54:04 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0248436:1","BRIGHTNESS"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:54:04 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:54:04 homees15 user.err rfd: HSSParameter::GetValue() id=BRIGHTNESS failed getting physical value.
Jan  1 23:54:04 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0092985:3","BRIGHTNESS"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan  1 23:54:04 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0092985:3","BRIGHTNESS"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:54:04 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:54:04 homees15 user.err rfd: HSSParameter::GetValue() id=BRIGHTNESS failed getting physical value.
Jan  1 23:54:04 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0670190:3","BRIGHTNESS"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan  1 23:54:04 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0670190:3","BRIGHTNESS"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:54:04 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:54:04 homees15 user.err rfd: HSSParameter::GetValue() id=BRIGHTNESS failed getting physical value.
Jan  1 23:54:04 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"LEQ0030501:1","BRIGHTNESS"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan  1 23:54:04 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"LEQ0030501:1","BRIGHTNESS"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:54:04 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:54:05 homees15 user.err rfd: HSSParameter::GetValue() id=TEMPERATURE failed getting physical value.
Jan  1 23:54:05 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0204173:1","TEMPERATURE"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan  1 23:54:05 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0204173:1","TEMPERATURE"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:54:05 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0.000000 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:54:05 homees15 user.err rfd: HSSParameter::GetValue() id=TEMPERATURE failed getting physical value.
Jan  1 23:54:05 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0204173:2","TEMPERATURE"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan  1 23:54:05 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0204173:2","TEMPERATURE"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:54:05 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0.000000 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:54:05 homees15 user.err rfd: HSSParameter::GetValue() id=TEMPERATURE failed getting physical value.
Jan  1 23:54:05 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0204173:3","TEMPERATURE"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan  1 23:54:05 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0204173:3","TEMPERATURE"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:54:05 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0.000000 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:54:05 homees15 user.err rfd: HSSParameter::GetValue() id=TEMPERATURE failed getting physical value.
Jan  1 23:54:05 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0204173:4","TEMPERATURE"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan  1 23:54:05 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0204173:4","TEMPERATURE"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:54:05 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0.000000 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:54:05 homees15 user.err rfd: HSSParameter::GetValue() id=TEMPERATURE failed getting physical value.
Jan  1 23:54:05 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0204173:5","TEMPERATURE"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan  1 23:54:05 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0204173:5","TEMPERATURE"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:54:05 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0.000000 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:54:05 homees15 user.err rfd: HSSParameter::GetValue() id=DECISION_VALUE failed getting physical value.
Jan  1 23:54:05 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ1838885:7","DECISION_VALUE"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan  1 23:54:05 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ1838885:7","DECISION_VALUE"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:54:05 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:54:05 homees15 user.err rfd: HSSParameter::GetValue() id=DECISION_VALUE failed getting physical value.
Jan  1 23:54:05 homees15 local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0181178:7","DECISION_VALUE"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan  1 23:54:05 homees15 local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0181178:7","DECISION_VALUE"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan  1 23:54:05 homees15 local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0 [ReadValue():iseDOMdpHSS.cpp:124]
Jan  1 23:54:24 homees15 user.err monit[992]: 'rfd' service restarted 1 times within 1 cycles(s) - exec
Jan  1 23:54:24 homees15 user.info monit[992]: 'rfd' exec: '/bin/triggerAlarm.tcl rfd restarted WatchDog-Alarm'
Jan  1 23:54:24 homees15 user.info monit[992]: 'rfd' process is running after previous restart timeout (manually recovered?)
Jan  1 23:54:42 homees15 user.info monit[992]: 'rfd' connection succeeded to [localhost]:32001 [TCP/IP]
Jan  2 00:00:00 homees15 daemon.info cuxd[9722]: rename '/tmp/logging_fuer_uebertrag_cuxd.log' -> '/tmp/logging_fuer_uebertrag_cuxd.log.0'
Jan  2 00:00:00 homees15 daemon.warn cuxd[11008]: directory '/usr/local/devlog/' not found!
Jan  2 00:01:03 homees15 auth.info sshd[11148]: Accepted password for root from 192.168.0.114 port 64123 ssh2
Lighttpd error log at the time of the hang:

Code: Select all

2020-01-01 22:26:04: (gw_backend.c.240) establishing connection failed: Connection refused socket: tcp:127.0.0.1:32000 
2020-01-01 22:26:04: (gw_backend.c.956) all handlers for /? on  are down. 
2020-01-01 22:26:06: (gw_backend.c.319) gw-server re-enabled: tcp:127.0.0.1:32000 127.0.0.1 32000  
2020-01-01 22:56:46: (gw_backend.c.240) establishing connection failed: Connection refused socket: tcp:127.0.0.1:32000 
2020-01-01 22:56:46: (gw_backend.c.956) all handlers for /? on  are down. 
2020-01-01 22:56:48: (gw_backend.c.319) gw-server re-enabled: tcp:127.0.0.1:32000 127.0.0.1 32000  
2020-01-01 23:27:04: (http-header-glue.c.1250) read(): Connection reset by peer 37 38 
2020-01-01 23:27:04: (gw_backend.c.2149) response not received, request sent: 432 on socket: tcp:127.0.0.1:32001 for /?, closing connection 
2020-01-01 23:27:05: (gw_backend.c.240) establishing connection failed: Connection refused socket: tcp:127.0.0.1:32000 
2020-01-01 23:27:05: (gw_backend.c.956) all handlers for /? on  are down. 
2020-01-01 23:27:07: (gw_backend.c.319) gw-server re-enabled: tcp:127.0.0.1:32000 127.0.0.1 32000  
2020-01-01 23:35:11: (gw_backend.c.240) establishing connection failed: Connection refused socket: tcp:127.0.0.1:8183 
2020-01-01 23:35:11: (gw_backend.c.956) all handlers for /esp/system.htm?sid=@1f45kt4qWg@&action=UpdateUI on  are down. 
2020-01-01 23:35:13: (gw_backend.c.319) gw-server re-enabled: tcp:127.0.0.1:8183 127.0.0.1 8183  
2020-01-01 23:41:24: (gw_backend.c.240) establishing connection failed: Connection refused socket: tcp:127.0.0.1:8183 
2020-01-01 23:41:24: (gw_backend.c.956) all handlers for /esp/system.htm?sid=@VC7B61WIfL@&action=UpdateUI on  are down. 
2020-01-01 23:41:26: (gw_backend.c.319) gw-server re-enabled: tcp:127.0.0.1:8183 127.0.0.1 8183  
2020-01-01 23:44:22: (gw_backend.c.240) establishing connection failed: Connection refused socket: tcp:127.0.0.1:8183 
2020-01-01 23:44:22: (gw_backend.c.956) all handlers for /esp/system.htm?sid=@EDh1sHpu4f@&action=UpdateUI on  are down. 
2020-01-01 23:44:24: (gw_backend.c.319) gw-server re-enabled: tcp:127.0.0.1:8183 127.0.0.1 8183  
2020-01-01 23:52:06: (gw_backend.c.240) establishing connection failed: Connection refused socket: tcp:127.0.0.1:8183 
2020-01-01 23:52:06: (gw_backend.c.956) all handlers for /esp/system.htm?sid=@ZczSlYjEe3@&action=UpdateUI on  are down. 
2020-01-01 23:52:08: (gw_backend.c.319) gw-server re-enabled: tcp:127.0.0.1:8183 127.0.0.1 8183  
2020-01-01 23:54:02: (http-header-glue.c.1250) read(): Connection reset by peer 39 45 
2020-01-01 23:54:02: (gw_backend.c.2149) response not received, request sent: 432 on socket: tcp:127.0.0.1:32001 for /?, closing connection 
2020-01-01 23:54:02: (gw_backend.c.240) establishing connection failed: Connection refused socket: tcp:127.0.0.1:32000 
2020-01-01 23:54:02: (gw_backend.c.956) all handlers for /? on  are down. 
2020-01-01 23:54:04: (gw_backend.c.319) gw-server re-enabled: tcp:127.0.0.1:32000 127.0.0.1 32000  
2020-01-01 23:54:07: (gw_backend.c.240) establishing connection failed: Connection refused socket: tcp:127.0.0.1:32000 
2020-01-01 23:54:07: (gw_backend.c.956) all handlers for /? on  are down. 
2020-01-01 23:54:09: (gw_backend.c.319) gw-server re-enabled: tcp:127.0.0.1:32000 127.0.0.1 32000  
2020-01-01 23:56:25: (gw_backend.c.240) establishing connection failed: Connection refused socket: tcp:127.0.0.1:32000 
2020-01-01 23:56:25: (gw_backend.c.956) all handlers for /? on  are down. 
2020-01-01 23:56:27: (gw_backend.c.319) gw-server re-enabled: tcp:127.0.0.1:32000 127.0.0.1 32000  
2020-01-01 23:56:40: (gw_backend.c.240) establishing connection failed: Connection refused socket: tcp:127.0.0.1:32000 
2020-01-01 23:56:40: (gw_backend.c.956) all handlers for /? on  are down. 
2020-01-01 23:56:42: (gw_backend.c.319) gw-server re-enabled: tcp:127.0.0.1:32000 127.0.0.1 32000  
2020-01-01 23:59:47: (gw_backend.c.240) establishing connection failed: Connection refused socket: tcp:127.0.0.1:32000 
2020-01-01 23:59:47: (gw_backend.c.956) all handlers for /? on  are down. 
2020-01-01 23:59:49: (gw_backend.c.319) gw-server re-enabled: tcp:127.0.0.1:32000 127.0.0.1 32000  
2020-01-02 00:00:00: (gw_backend.c.240) establishing connection failed: Connection refused socket: tcp:127.0.0.1:32000 
2020-01-02 00:00:00: (gw_backend.c.956) all handlers for /? on  are down. 
2020-01-02 00:00:02: (gw_backend.c.319) gw-server re-enabled: tcp:127.0.0.1:32000 127.0.0.1 32000  
2020-01-02 00:02:20: (gw_backend.c.240) establishing connection failed: Connection refused socket: tcp:127.0.0.1:32000 
2020-01-02 00:02:20: (gw_backend.c.956) all handlers for /? on  are down. 
2020-01-02 00:02:22: (gw_backend.c.319) gw-server re-enabled: tcp:127.0.0.1:32000 127.0.0.1 32000  
2020-01-02 00:03:22: (gw_backend.c.240) establishing connection failed: Connection refused socket: tcp:127.0.0.1:32000 
2020-01-02 00:03:22: (gw_backend.c.956) all handlers for /? on  are down. 
2020-01-02 00:03:24: (gw_backend.c.319) gw-server re-enabled: tcp:127.0.0.1:32000 127.0.0.1 32000  
Messages log at the time of the hang: identical to the syslog excerpt above.
Lighttpd access log at the time of the hang ("21418" is a system variable that continuously records the charge level of a wall-mounted tablet):

Code: Select all

192.168.0.31 192.168.0.30:2001 - [01/Jan/2020:22:26:04 +0100] "POST / HTTP/1.1" 200 132 "-" "-"
192.168.0.31 192.168.0.30:2000 - [01/Jan/2020:22:26:04 +0100] "POST / HTTP/1.1" 503 2900 "-" "-"
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:26:04 +0100] "GET /addons/xmlapi/programlist.cgi HTTP/1.1" 200 7248 "-" __TABLET HIER GENANNT__
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:26:04 +0100] "GET /addons/xmlapi/version.cgi HTTP/1.1" 200 78 "-" __TABLET HIER GENANNT__
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:26:04 +0100] "GET /addons/xmlapi/programlist.cgi HTTP/1.1" 200 7256 "-" __TABLET HIER GENANNT__
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:26:04 +0100] "GET /addons/xmlapi/sysvarlist.cgi HTTP/1.1" 200 5309 "-" __TABLET HIER GENANNT__
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:26:05 +0100] "GET /addons/xmlapi/statelist.cgi HTTP/1.1" 200 263008 "-" __TABLET HIER GENANNT__
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:26:07 +0100] "GET /addons/xmlapi/favoritelist.cgi HTTP/1.1" 200 1107 "-" __TABLET HIER GENANNT__
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:26:07 +0100] "GET /addons/xmlapi/systemNotification.cgi HTTP/1.1" 200 102 "-" __TABLET HIER GENANNT__
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:29:59 +0100] "GET /config/xmlapi/statechange.cgi?ise_id=21418&new_value=100 HTTP/1.1" 200 116 "-" "Tasker/4.8u1m (Android/4.0.4)"
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:39:59 +0100] "GET /config/xmlapi/statechange.cgi?ise_id=21418&new_value=100 HTTP/1.1" 200 116 "-" "Tasker/4.8u1m (Android/4.0.4)"
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:49:59 +0100] "GET /config/xmlapi/statechange.cgi?ise_id=21418&new_value=100 HTTP/1.1" 200 116 "-" "Tasker/4.8u1m (Android/4.0.4)"
192.168.0.31 192.168.0.30:2001 - [01/Jan/2020:22:56:45 +0100] "POST / HTTP/1.1" 500 3231 "-" "-"
192.168.0.31 192.168.0.30:2000 - [01/Jan/2020:22:56:46 +0100] "POST / HTTP/1.1" 503 2900 "-" "-"
192.168.0.31 192.168.0.30:2001 - [01/Jan/2020:22:56:48 +0100] "POST / HTTP/1.1" 200 132 "-" "-"
192.168.0.31 192.168.0.30:2000 - [01/Jan/2020:22:56:48 +0100] "POST / HTTP/1.1" 503 2900 "-" "-"
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:56:48 +0100] "GET /addons/xmlapi/programlist.cgi HTTP/1.1" 200 7256 "-" __TABLET HIER GENANNT__
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:56:48 +0100] "GET /addons/xmlapi/version.cgi HTTP/1.1" 200 78 "-" __TABLET HIER GENANNT__
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:56:48 +0100] "GET /addons/xmlapi/sysvarlist.cgi HTTP/1.1" 200 5309 "-" __TABLET HIER GENANNT__
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:56:48 +0100] "GET /addons/xmlapi/programlist.cgi HTTP/1.1" 200 7256 "-" __TABLET HIER GENANNT__
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:56:49 +0100] "GET /addons/xmlapi/statelist.cgi HTTP/1.1" 200 263016 "-" __TABLET HIER GENANNT__
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:56:51 +0100] "GET /addons/xmlapi/favoritelist.cgi HTTP/1.1" 200 1107 "-" __TABLET HIER GENANNT__
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:56:51 +0100] "GET /addons/xmlapi/systemNotification.cgi HTTP/1.1" 200 102 "-" __TABLET HIER GENANNT__
192.168.0.31 192.168.0.30 - [01/Jan/2020:22:59:59 +0100] "GET /config/xmlapi/statechange.cgi?ise_id=21418&new_value=100 HTTP/1.1" 200 116 "-" "Tasker/4.8u1m (Android/4.0.4)"
Do you have any idea what else I could try, or does this problem ring a bell?
Please let me know if there is anything else I can contribute to the analysis, e.g. specific log files or similar.

Many thanks in advance for the great support!

Regards
Max
Last edited by Roland M. on 02.01.2020, 00:43, edited 1 time in total.
Reason: topic moved

SKB
Posts: 73
Joined: 28.11.2018, 11:51
Has thanked: 4 times

Re: Regular "hanging" of the RaspberryMatic

Post by SKB » 28.01.2020, 18:37

Hello Max,
I think I am having somewhat similar problems.
Have you found a solution on your end yet?
... those who do not move with the times will be moved out by the times ...

roe1974
Posts: 746
Joined: 17.10.2017, 16:15
System: Alternative CCU (based on OCCU)
Location: Vienna
Has thanked: 52 times
Been thanked: 13 times

Re: Regular "hanging" of the RaspberryMatic

Post by roe1974 » 29.01.2020, 08:28

Which ports have you opened/entered in the RM firewall? (After all, you have set "Ports blocked".)

Besides, "192.168.0.0/120", as you wrote it, is an invalid netmask!
If you mean, for example, the range 192.168.0.1 - 192.168.0.254, it should be 192.168.0.0/24.
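
If you want to double-check a CIDR range, here is a quick sketch (it needs Python 3 on some machine; python3 is not part of the stock RaspberryMatic image, so run it elsewhere):

Code: Select all

# valid IPv4 prefix lengths are 0-32; /120 only exists for 128-bit IPv6
python3 -c 'import ipaddress; print(ipaddress.ip_network("192.168.0.0/24"))'
# -> 192.168.0.0/24  (covers 192.168.0.0 ... 192.168.0.255)
python3 -c 'import ipaddress; print(ipaddress.ip_network("192.168.0.0/120"))'
# -> ValueError, because the prefix exceeds the 32 bits of an IPv4 address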

Regards, Richard

bdombrowsky
Posts: 10
Joined: 23.12.2014, 18:47

Re: Regular "hanging" of the RaspberryMatic

Post by bdombrowsky » 24.04.2020, 06:18

Hello,

any new findings here?

I currently have the same problem and have not been able to find a solution so far. I have even switched to a completely new Raspberry Pi plus a new radio module, and the crashes still occur (naturally re-installed from scratch, with a restore of a backup).

Regards
Benjamin

bdombrowsky
Posts: 10
Joined: 23.12.2014, 18:47

Re: Regular "hanging" of the RaspberryMatic

Post by bdombrowsky » 27.04.2020, 19:15

Hi,

in my case it was apparently the syslog integration; since I disabled it, my CCU has been running without interruption.

Regards
Benjamin

bdombrowsky
Posts: 10
Joined: 23.12.2014, 18:47

Re: Regular "hanging" of the RaspberryMatic

Post by bdombrowsky » 28.04.2020, 07:01

Hi,

unfortunately I had another "hang" last night after all.

The problem is really getting tiresome, since this system controls quite a few things here.

Does nobody else have this problem?

Best regards
Benjamin

jmaus
Posts: 9844
Joined: 17.02.2015, 14:45
System: Alternative CCU (based on OCCU)
Location: Dresden
Has thanked: 462 times
Been thanked: 1863 times
Contact:

Re: Regular "hanging" of the RaspberryMatic

Post by jmaus » 28.04.2020, 08:19

First of all, please describe what you mean by a "hang". How does it manifest itself? Are there any system messages at the time of the hang? What about the system logs? What can be found in them?
RaspberryMatic 3.75.6.20240316 @ ProxmoxVE – ~200 Hm-RF/HmIP-RF/HmIPW devices + ioBroker + HomeAssistant – GitHub / Sponsors / PayPal / ☕️

oppey
Posts: 81
Joined: 14.05.2020, 07:58
System: Alternative CCU (based on OCCU)
Has thanked: 14 times

Re: Regular "hanging" of the RaspberryMatic

Post by oppey » 07.09.2020, 10:03

Hi,
for a few days now I have also been seeing constant restarts.

Code: Select all

***** messages *****
Sep  7 04:00:06 homematic-raspi user.err monit[1404]: 'hs485d' service restarted 1 times within 1 cycles(s) - exec
Sep  7 04:00:06 homematic-raspi user.info monit[1404]: 'hs485d' exec: '/bin/triggerAlarm.tcl hs485d restarted WatchDog-Alarm'
Sep  7 04:00:06 homematic-raspi user.info monit[1404]: 'hs485d' process is running after previous restart timeout (manually recovered?)
Sep  7 04:00:25 homematic-raspi user.err monit[1404]: 'hs485d' failed protocol test [DEFAULT] at [localhost]:32000 [TCP/IP] -- Connection refused
Sep  7 04:00:25 homematic-raspi user.info monit[1404]: 'hs485d' trying to restart
Sep  7 04:00:25 homematic-raspi user.info monit[1404]: 'hs485d' restart: '/etc/init.d/S60hs485d restart'
Sep  7 04:00:44 homematic-raspi user.err monit[1404]: 'hs485d' service restarted 1 times within 1 cycles(s) - exec
Sep  7 04:00:44 homematic-raspi user.info monit[1404]: 'hs485d' exec: '/bin/triggerAlarm.tcl hs485d restarted WatchDog-Alarm'
Sep  7 04:00:44 homematic-raspi user.info monit[1404]: 'hs485d' process is running after previous restart timeout (manually recovered?)
Sep  7 04:01:00 homematic-raspi user.info dutycycle: Wired-LGW-Status: false
Sep  7 04:01:03 homematic-raspi user.err monit[1404]: 'hs485d' failed protocol test [DEFAULT] at [localhost]:32000 [TCP/IP] -- Connection refused
Sep  7 04:01:03 homematic-raspi user.info monit[1404]: 'hs485d' trying to restart
Sep  7 04:01:03 homematic-raspi user.info monit[1404]: 'hs485d' restart: '/etc/init.d/S60hs485d restart'
Sep  7 04:01:23 homematic-raspi user.err monit[1404]: 'hs485d' service restarted 1 times within 1 cycles(s) - exec
Sep  7 04:01:23 homematic-raspi user.info monit[1404]: 'hs485d' exec: '/bin/triggerAlarm.tcl hs485d restarted WatchDog-Alarm'
Sep  7 04:01:23 homematic-raspi user.info monit[1404]: 'hs485d' process is running after previous restart timeout (manually recovered?)
Sep  7 04:01:42 homematic-raspi user.err monit[1404]: 'hs485d' failed protocol test [DEFAULT] at [localhost]:32000 [TCP/IP] -- Connection refused
Sep  7 04:01:42 homematic-raspi user.info monit[1404]: 'hs485d' trying to restart
Sep  7 04:01:42 homematic-raspi user.info monit[1404]: 'hs485d' restart: '/etc/init.d/S60hs485d restart'
Sep  7 04:02:00 homematic-raspi user.info dutycycle: Wired-LGW-Status: false
Sep  7 04:02:01 homematic-raspi user.err monit[1404]: 'hs485d' service restarted 1 times within 1 cycles(s) - exec
Sep  7 04:02:01 homematic-raspi user.info monit[1404]: 'hs485d' exec: '/bin/triggerAlarm.tcl hs485d restarted WatchDog-Alarm'
Sep  7 04:02:01 homematic-raspi user.info monit[1404]: 'hs485d' process is running after previous restart timeout (manually recovered?)
Sep  7 04:02:20 homematic-raspi user.err monit[1404]: 'hs485d' failed protocol test [DEFAULT] at [localhost]:32000 [TCP/IP] -- Connection refused
My RaspberryMatic runs with IP, HM wired and HM IP wired.
Unfortunately, programs are not being executed.

Best regards
Christian

SavageDeath
Posts: 15
Joined: 01.01.2020, 23:37
Has thanked: 6 times

Re: Regular "hanging" of the RaspberryMatic

Post by SavageDeath » 15.09.2020, 23:46

Hangs here as well... syslog to follow / first I have to find it :D

SavageDeath
Posts: 15
Joined: 01.01.2020, 23:37
Has thanked: 6 times

Re: Regular "hanging" of the RaspberryMatic

Post by SavageDeath » 18.10.2020, 21:41

Hello everyone,

In the meantime I have noticed that my CCU always fails on Saturday nights. Can anyone help me with the error log? From what I have read, the log is deleted on every reboot. How can I back it up regularly, and which log is best suited? I was thinking of something along the lines of the sketch below.
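
A rough, untested sketch of what I had in mind (that /usr/local survives a reboot and that a root crontab entry is the right mechanism are my assumptions):

Code: Select all

# one-time: create a target directory on the persistent user partition
mkdir -p /usr/local/log-backup
# crontab entry: snapshot the volatile syslog every 5 minutes, so the
# last copy before a crash/reboot is preserved (overwrites the previous one)
*/5 * * * * cp /var/log/messages /usr/local/log-backup/messages.last

Would that be a reasonable approach, and is /var/log/messages the right file to keep?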

Best regards, and thank you for your efforts (!)
SavageDeath
