
By hreintke
#6600 LS,
I am using the RTOS SDK and working with socket networking.
When setting the receive timeout, I have to pass a value a factor of 1000 larger than the timeout I actually want.

Code:

      struct timeval tv;
      tv.tv_sec = 2000;   /* intended as 2 seconds; 2000 is what it takes on this SDK */
      tv.tv_usec = 0;

      if (setsockopt(local_server_sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) < 0) {
          printf("ntp_time > error setting timeout\n");
      }

This actually sets the timeout to two seconds.

Does anyone else see the same behaviour?
Kind regards,
Herman
By hreintke
#6658 Athena,
What really happens is that the timeout is actually two seconds when I set tv.tv_sec = 2000.
When I set tv.tv_sec = 2, the receive call returns almost immediately.
It looks like a bug in the RTOS SDK; if you want, I can post the debug output.
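In the meantime, here is a minimal sketch of the workaround I am using, based only on the behaviour above (the value apparently ends up being read as milliseconds). The helper name set_recv_timeout_s is just something made up for illustration, not part of the SDK:

Code:

      #include <stdio.h>
      #include "lwip/sockets.h"

      /* Hypothetical helper: on this SDK the timeout value appears to be
         taken as milliseconds, so put the desired timeout * 1000 into tv_sec. */
      static int set_recv_timeout_s(int sock, int seconds)
      {
          struct timeval tv;
          tv.tv_sec = seconds * 1000;  /* factor 1000, as described above */
          tv.tv_usec = 0;
          return setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
      }

      /* usage: a two second receive timeout on the local server socket */
      if (set_recv_timeout_s(local_server_sock, 2) < 0) {
          printf("ntp_time > error setting timeout\n");
      }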
Herman