# HG changeset patch
# User Edouard Tisserant
# Date 1575899622 -3600
# Node ID d9b5303d43dc9b43f0fb5189c0e5f9fdbccd4bd9
# Parent  3b99c908f43b82d315e7773c67bddc2961c39030
SVGHMI: had to move the problem of waking up the python thread from the PLC thread into platform-specific code. Since Xenomai's cobalt threads are definitely incompatible with the normal POSIX threads of the python interpreter binary, we must synchronize them with arcane rt_pipes (the only ones that really work across domains), as already done for the debug and python async eval blocks.

diff -r 3b99c908f43b -r d9b5303d43dc svghmi/svghmi.c
--- a/svghmi/svghmi.c	Mon Dec 09 10:43:54 2019 +0100
+++ b/svghmi/svghmi.c	Mon Dec 09 14:53:42 2019 +0100
@@ -189,8 +189,8 @@
     return 0;
 }
 
-static pthread_cond_t svghmi_send_WakeCond = PTHREAD_COND_INITIALIZER;
-static pthread_mutex_t svghmi_send_WakeCondLock = PTHREAD_MUTEX_INITIALIZER;
+void SVGHMI_SuspendFromPythonThread(void);
+void SVGHMI_WakeupFromRTThread(void);
 
 static int continue_collect;
 
@@ -199,20 +199,15 @@
     bzero(rbuf,sizeof(rbuf));
     bzero(wbuf,sizeof(wbuf));
 
-    pthread_mutex_lock(&svghmi_send_WakeCondLock);
     continue_collect = 1;
-    pthread_cond_signal(&svghmi_send_WakeCond);
-    pthread_mutex_unlock(&svghmi_send_WakeCondLock);
 
     return 0;
 }
 
 void __cleanup_svghmi()
 {
-    pthread_mutex_lock(&svghmi_send_WakeCondLock);
     continue_collect = 0;
-    pthread_cond_signal(&svghmi_send_WakeCond);
-    pthread_mutex_unlock(&svghmi_send_WakeCondLock);
+    SVGHMI_WakeupFromRTThread();
 }
 
 void __retrieve_svghmi()
@@ -225,20 +220,16 @@
         global_write_dirty = 0;
         traverse_hmi_tree(write_iterator);
         if(global_write_dirty) {
-            pthread_cond_signal(&svghmi_send_WakeCond);
+            SVGHMI_WakeupFromRTThread();
         }
     }
 }
 
 /* PYTHON CALLS */
 int svghmi_send_collect(uint32_t *size, char **ptr){
-    int do_collect;
-    pthread_mutex_lock(&svghmi_send_WakeCondLock);
-    pthread_cond_wait(&svghmi_send_WakeCond, &svghmi_send_WakeCondLock);
-    do_collect = continue_collect;
-    pthread_mutex_unlock(&svghmi_send_WakeCondLock);
-
-    if(do_collect) {
+    SVGHMI_SuspendFromPythonThread();
+
+    if(continue_collect) {
         int res;
         sbufidx = HMI_HASH_SIZE;
         if((res = traverse_hmi_tree(send_iterator)) == 0)
diff -r 3b99c908f43b -r d9b5303d43dc targets/Linux/plc_Linux_main.c
--- a/targets/Linux/plc_Linux_main.c	Mon Dec 09 10:43:54 2019 +0100
+++ b/targets/Linux/plc_Linux_main.c	Mon Dec 09 14:53:42 2019 +0100
@@ -235,3 +235,18 @@
 {
     pthread_mutex_lock(&python_mutex);
 }
+
+static pthread_cond_t svghmi_send_WakeCond = PTHREAD_COND_INITIALIZER;
+static pthread_mutex_t svghmi_send_WakeCondLock = PTHREAD_MUTEX_INITIALIZER;
+
+void SVGHMI_SuspendFromPythonThread(void)
+{
+    pthread_mutex_lock(&svghmi_send_WakeCondLock);
+    pthread_cond_wait(&svghmi_send_WakeCond, &svghmi_send_WakeCondLock);
+    pthread_mutex_unlock(&svghmi_send_WakeCondLock);
+}
+
+void SVGHMI_WakeupFromRTThread(void)
+{
+    pthread_cond_signal(&svghmi_send_WakeCond);
+}
diff -r 3b99c908f43b -r d9b5303d43dc targets/Xenomai/plc_Xenomai_main.c
--- a/targets/Xenomai/plc_Xenomai_main.c	Mon Dec 09 10:43:54 2019 +0100
+++ b/targets/Xenomai/plc_Xenomai_main.c	Mon Dec 09 14:53:42 2019 +0100
@@ -26,6 +26,8 @@
 #define PLC_STATE_WAITDEBUG_PIPE_CREATED 64
 #define PLC_STATE_WAITPYTHON_FILE_OPENED 128
 #define PLC_STATE_WAITPYTHON_PIPE_CREATED 256
+#define PLC_STATE_SVGHMI_FILE_OPENED 512
+#define PLC_STATE_SVGHMI_PIPE_CREATED 1024
 
 #define WAITDEBUG_PIPE_DEVICE "/dev/rtp0"
 #define WAITDEBUG_PIPE_MINOR 0
@@ -35,6 +37,8 @@
 #define WAITPYTHON_PIPE_MINOR 2
 #define PYTHON_PIPE_DEVICE "/dev/rtp3"
 #define PYTHON_PIPE_MINOR 3
+#define SVGHMI_PIPE_DEVICE "/dev/rtp4"
+#define SVGHMI_PIPE_MINOR 4
 #define PIPE_SIZE 1
 
 // rt-pipes commands
@@ -68,10 +72,12 @@
 RT_PIPE WaitPython_pipe;
 RT_PIPE Debug_pipe;
 RT_PIPE Python_pipe;
+RT_PIPE svghmi_pipe;
 int WaitDebug_pipe_fd;
 int WaitPython_pipe_fd;
 int Debug_pipe_fd;
 int Python_pipe_fd;
+int svghmi_pipe_fd;
 
 int PLC_shutdown = 0;
 
@@ -114,6 +120,16 @@
         PLC_state &= ~PLC_STATE_TASK_CREATED;
     }
 
+    if (PLC_state & PLC_STATE_SVGHMI_PIPE_CREATED) {
+        rt_pipe_delete(&svghmi_pipe);
+        PLC_state &= ~PLC_STATE_SVGHMI_PIPE_CREATED;
+    }
+
+    if (PLC_state & PLC_STATE_SVGHMI_FILE_OPENED) {
+        close(svghmi_pipe_fd);
+        PLC_state &= ~PLC_STATE_SVGHMI_FILE_OPENED;
+    }
+
     if (PLC_state & PLC_STATE_WAITDEBUG_PIPE_CREATED) {
         rt_pipe_delete(&WaitDebug_pipe);
         PLC_state &= ~PLC_STATE_WAITDEBUG_PIPE_CREATED;
@@ -240,6 +256,16 @@
         _startPLCLog(FO WAITPYTHON_PIPE_DEVICE);
     PLC_state |= PLC_STATE_WAITPYTHON_FILE_OPENED;
 
+    /* create svghmi_pipe */
+    if(rt_pipe_create(&svghmi_pipe, "svghmi_pipe", SVGHMI_PIPE_MINOR, PIPE_SIZE) < 0)
+        _startPLCLog(FO "svghmi_pipe real-time end");
+    PLC_state |= PLC_STATE_SVGHMI_PIPE_CREATED;
+
+    /* open svghmi_pipe */
+    if((svghmi_pipe_fd = open(SVGHMI_PIPE_DEVICE, O_RDWR)) == -1)
+        _startPLCLog(FO SVGHMI_PIPE_DEVICE);
+    PLC_state |= PLC_STATE_SVGHMI_FILE_OPENED;
+
     /*** create PLC task ***/
     if(rt_task_create(&PLC_task, "PLC_task", 0, 50, T_JOINABLE))
         _startPLCLog("Failed creating PLC task");
@@ -395,6 +421,18 @@
     }    /* as plc does not wait for lock. */
 }
 
+void SVGHMI_SuspendFromPythonThread(void)
+{
+    char cmd = 1; /* whatever */
+    read(svghmi_pipe_fd, &cmd, sizeof(cmd));
+}
+
+void SVGHMI_WakeupFromRTThread(void)
+{
+    char cmd;
+    rt_pipe_write(&svghmi_pipe, &cmd, sizeof(cmd), P_NORMAL);
+}
+
 #ifndef HAVE_RETAIN
 int CheckRetainBuffer(void)
 {