Problems playing ongoing recordings?

Message ID 47F16E41.7010301@o2.pl
State New

Commit Message

Artur Skawina March 31, 2008, 11:05 p.m. UTC
  Alexw wrote:
>>> Sometimes the player stops in the middle of a recording due to a zero size request. Here is the log:
>>>
>>> vdr: [3836] dvbplayer thread started (pid=3643, tid=3836)
>>> vdr: [3836] resuming replay at index 1950 (0:01:18.01)
>>> vdr: [3837] non blocking file reader thread started (pid=3643, tid=3837)
>>> vdr: [3836] SetBrokenLink: no GOP header found in video packet
>>> vdr: [3836] setting audio track to 1 (0)
>>> vdr: [3836] playing '/var/vdr/video/SERVER/recording/2008-03-28.18.58.50.50.rec/001.vdr'
>>>
>>> <<<unexpected stop of replay>>>
>>>
>>> vdr: [3837] non blocking file reader thread ended (pid=3643, tid=3837)
>>> vdr: [3836] dvbplayer thread ended (pid=3643, tid=3836)

>>> vdr: [5618] WANT: fd: 25 1068536495 .. 1068722913 SIZE: 186418
>>> vdr: [5618] READ: fd: 25 1068536495 .. 1068666704 SIZE: 130209 jump:         0 ra: 12582912
>>> vdr: [5618] WANT: fd: 25 1068666704 .. 1068983331 SIZE: 316627
>>> vdr: [5618] READ: fd: 25 1068666704 .. 1068680058 SIZE:  13354 jump:         0 ra: 12582912
>>> vdr: [5618] READ: fd: 25 1068680058 .. 1068690344 SIZE:  10286 jump:         0 ra: 12582912
>>> vdr: [5618] READ: fd: 25 1068690344 .. 1068721839 SIZE:  31495 jump:         0 ra: 12582912
>>> vdr: [5618] READ: fd: 25 1068721839 .. 1069246127 SIZE: 524288 jump:         0 ra: 12582912
>>> vdr: [5618] WANT: fd: 25 1069246127 .. 1070294703 SIZE: 1048576
>>> vdr: [5618] READ: fd: 25 1069246127 .. 1069246127 SIZE:      0 jump:         0 ra: 12582912
>>> vdr: [5618] non blocking file reader thread ended (pid=5563, tid=5618)
>>> vdr: [5617] dvbplayer thread ended (pid=5563, tid=5617)
>>>     
>> Weird, cUnbufferedFile::Read(Size=0). I'll try to reproduce this.
>>   
> Sometimes it takes a long time to occur, sometimes not.

Did this start after applying my patch, or did it happen in the past too?
Does it always happen at a certain position? Specific stream or bitrate?
I don't recall ever having a similar problem; the number 524288 (exactly 512 KB)
looks a bit suspicious...


>>> As you can see the requested size is increasing until it reaches the max buf. This is also a period with freezes in the video (late delivery).
>>>     
>> Do these problems (0-sized reads) occur only near the end of a program being recorded?
>>   
> No, you can experience a stop in the middle of a recording.
> 
>> Also, I see from the above that the readahead code needs to be more aggressive:
>>
>>> vdr: [5627] WANT: fd: 25 1188531493 .. 1188861741 SIZE: 330248
>> [... small reads...]
>>> vdr: [5627] READ: fd: 25 1188616808 .. 1189141096 SIZE: 524288 jump:         0 ra: 12582912
>>>     
>> the readahead window does not cover the area which is being read later -- this certainly
>> is likely to stall playback. I'll fix this (i did not expect such a large difference in
>> read request sizes.)

The attached patch makes the readahead window grow much faster; this causes more
I/O at the start of playback, but should handle cases like the one above better.
If it works correctly, all the ranges mentioned in "READ:" lines should fall inside
the preceding "WANT:" range, and playback shouldn't stall.
Here the readahead window grows to ~5 Mbytes just after playback starts;
I still need to check that this isn't too fast, doesn't saturate the disk and/or
link, and doesn't cause delays when jumping etc. Tested by playing a few files from an
NFS mount; I didn't notice any problems so far.
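The containment check ("every READ range inside the preceding WANT range") is easy to automate against logs like the ones above. A rough standalone sketch, not part of the patch; the parsing assumes the exact `WANT:`/`READ:` log format shown in this thread:

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>
#include <string>
#include <vector>

// Count READ ranges that are not fully covered by the most recent WANT
// (readahead) range; 0 means the readahead kept up with the reader.
static int CountUncoveredReads(const std::vector<std::string> &log)
{
  long long wantStart = -1, wantEnd = -1;
  int uncovered = 0;
  for (const std::string &line : log) {
      long long a, b;
      const char *p;
      if ((p = strstr(line.c_str(), "WANT:")) != NULL &&
          sscanf(p, "WANT: fd:%*d %lld .. %lld", &a, &b) == 2) {
         wantStart = a;
         wantEnd = b;
         }
      else if ((p = strstr(line.c_str(), "READ:")) != NULL &&
               sscanf(p, "READ: fd:%*d %lld .. %lld", &a, &b) == 2) {
         if (wantStart >= 0 && (a < wantStart || b > wantEnd))
            uncovered++;
         }
      }
  return uncovered;
}
```

Run over the first log above, this flags exactly one read: the 524288-byte one ending at 1069246127, well past the WANT end of 1068983331.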

An incremental patch would look like this (the attached one (vs 1.4.7) already includes it):


artur
diff --git a/cutter.c b/cutter.c
index 5170ae4..7e2e506 100644
--- a/cutter.c
+++ b/cutter.c
@@ -66,7 +66,8 @@ void cCuttingThread::Action(void)
      toFile = toFileName->Open();
      if (!fromFile || !toFile)
         return;
-     fromFile->SetReadAhead(MEGABYTE(20));
+     fromFile->CuttingSrc();
+     toFile->CuttingDst();
      int Index = Mark->position;
      Mark = fromMarks.Next(Mark);
      int FileSize = 0;
@@ -91,7 +92,7 @@ void cCuttingThread::Action(void)
            if (fromIndex->Get(Index++, &FileNumber, &FileOffset, &PictureType, &Length)) {
               if (FileNumber != CurrentFileNumber) {
                  fromFile = fromFileName->SetOffset(FileNumber, FileOffset);
-                 fromFile->SetReadAhead(MEGABYTE(20));
+                 fromFile->CuttingSrc();
                  CurrentFileNumber = FileNumber;
                  }
               if (fromFile) {
@@ -124,6 +125,7 @@ void cCuttingThread::Action(void)
                     error = "toFile 1";
                     break;
                     }
+                 toFile->CuttingDst();
                  FileSize = 0;
                  }
               LastIFrame = 0;
@@ -164,6 +166,7 @@ void cCuttingThread::Action(void)
                        error = "toFile 2";
                        break;
                        }
+                    toFile->CuttingDst();
                     FileSize = 0;
                     }
                  }
diff --git a/tools.c b/tools.c
index 255c806..e22614f 100644
--- a/tools.c
+++ b/tools.c
@@ -1055,7 +1055,8 @@ bool cSafeFile::Close(void)
 
 #define USE_FADVISE
 
-#define WRITE_BUFFER KILOBYTE(800)
+//#define dfsyslog dsyslog			// uncomment to turn on fadvise related logging
+#define dfsyslog(a...) do {} while (0)
 
 cUnbufferedFile::cUnbufferedFile(void)
 {
@@ -1073,12 +1074,19 @@ int cUnbufferedFile::Open(const char *FileName, int Flags, mode_t Mode)
   fd = open(FileName, Flags, Mode);
   curpos = 0;
 #ifdef USE_FADVISE
-  begin = lastpos = ahead = 0;
-  cachedstart = 0;
-  cachedend = 0;
-  readahead = KILOBYTE(128);
+  lastpos = 0;
+  lastjump = 0;
+  cachedstart = cachedend = 0;
+  // 4M readahead seems to work for playback while cutting,
+  // but isn't quite enough for FF while cutting...
+  readahead = MEGABYTE(12);
+  writebuffer = KILOBYTE(2000);
   written = 0;
   totwritten = 0;
+  cutting = 0;
+  
+  wbuf = NULL;
+  
   if (fd >= 0)
      posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM); // we could use POSIX_FADV_SEQUENTIAL, but we do our own readahead, disabling the kernel one.
 #endif
@@ -1087,6 +1095,11 @@ int cUnbufferedFile::Open(const char *FileName, int Flags, mode_t Mode)
 
 int cUnbufferedFile::Close(void)
 {
+  if (fd >= 0 && wbuf) {
+     if (wbuf_len)
+        WriteBuf(wbuf, wbuf_len);
+     free(wbuf);
+     }
 #ifdef USE_FADVISE
   if (fd >= 0) {
      if (totwritten)    // if we wrote anything make sure the data has hit the disk before
@@ -1107,15 +1120,35 @@ int cUnbufferedFile::Close(void)
 #define FADVGRAN   KILOBYTE(4) // AKA fadvise-chunk-size; PAGE_SIZE or getpagesize(2) would also work.
 #define READCHUNK  MEGABYTE(8)
 
-void cUnbufferedFile::SetReadAhead(size_t ra)
+void cUnbufferedFile::CuttingSrc(void)
+{
+  readahead = MEGABYTE(20);
+  cutting = 1;
+}
+
+void cUnbufferedFile::CuttingDst(void)
 {
-  readahead = ra;
+  writebuffer = MEGABYTE(16);  // 4M is too little.
+  cutting = 2;
 }
 
 int cUnbufferedFile::FadviseDrop(off_t Offset, off_t Len)
 {
-  // rounding up the window to make sure that not PAGE_SIZE-aligned data gets freed.
-  return posix_fadvise(fd, Offset - (FADVGRAN - 1), Len + (FADVGRAN - 1) * 2, POSIX_FADV_DONTNEED);
+  // Round up the window to make sure that not PAGE_SIZE-aligned data gets freed.
+  // Note that that also means calling this with Len==0 isn't special (unlike fadvise).
+  dfsyslog("DROP: fd:%3d %9zd .. %9zd SIZE: %6zd", fd, Offset, Offset+Len, Len);
+  off_t prewin = min(Offset ,(off_t)FADVGRAN - 1); // must not wrap below 0.
+  return posix_fadvise(fd, Offset - prewin, Len + prewin + (FADVGRAN - 1), POSIX_FADV_DONTNEED);
+}
+
+// Trigger background readahead on the specified range and add
+// it to the "cached" area so that we can drop the data later.
+int cUnbufferedFile::FadviseRead(off_t Offset, off_t Len)
+{
+  dfsyslog("WANT: fd:%3d %9zd .. %9zd SIZE: %6zd", fd, Offset, Offset+Len, Len);
+  cachedstart = min(cachedstart, Offset);
+  cachedend = max(cachedend, Offset+Len);
+  return posix_fadvise(fd, Offset, Len, POSIX_FADV_WILLNEED);
 }
 
 off_t cUnbufferedFile::Seek(off_t Offset, int Whence)
@@ -1131,82 +1164,107 @@ ssize_t cUnbufferedFile::Read(void *Data, size_t Size)
   if (fd >= 0) {
 #ifdef USE_FADVISE
      off_t jumped = curpos-lastpos; // nonzero means we're not at the last offset
-     if ((cachedstart < cachedend) && (curpos < cachedstart || curpos > cachedend)) {
+     
+     dfsyslog("READ: fd:%3d %9zd .. %9zd SIZE: %6zd jump: %9zd ra: %7zd", fd, curpos, curpos+Size, Size, jumped, readahead);
+     
+     if (curpos < cachedstart || curpos > cachedend) {
         // current position is outside the cached window -- invalidate it.
-        FadviseDrop(cachedstart, cachedend-cachedstart);
+        if (cachedstart != cachedend)
+           FadviseDrop(cachedstart, cachedend-cachedstart);
         cachedstart = curpos;
         cachedend = curpos;
         }
-     cachedstart = min(cachedstart, curpos);
 #endif
      ssize_t bytesRead = safe_read(fd, Data, Size);
 #ifdef USE_FADVISE
      if (bytesRead > 0) {
+        cachedstart = min(cachedstart, curpos);
         curpos += bytesRead;
-        cachedend = max(cachedend, curpos);
-
         // Read ahead:
-        // no jump? (allow small forward jump still inside readahead window).
-        if (jumped >= 0 && jumped <= (off_t)readahead) {
-           // Trigger the readahead IO, but only if we've used at least
-           // 1/2 of the previously requested area. This avoids calling
-           // fadvise() after every read() call.
-           if (ahead - curpos < (off_t)(readahead / 2)) {
-              posix_fadvise(fd, curpos, readahead, POSIX_FADV_WILLNEED);
-              ahead = curpos + readahead;
-              cachedend = max(cachedend, ahead);
-              }
-           if (readahead < Size * 32) { // automagically tune readahead size.
-              readahead = Size * 32;
+        // no jump or small forward jump still inside readahead window.
+        if (jumped >= 0 && curpos <= cachedend) {
+           // Trigger the readahead IO, but only if we've used at least some of the previously
+           // requested area. This avoids calling fadvise() after every read() call.
+           size_t cachedsize = cachedend - curpos;
+           size_t ra = cachedsize + Size*8 + (size_t)jumped*1;
+           if (cutting)
+              ra += KILOBYTE(64);
+           ra = min(readahead, ra);
+           // Start I/O if we A) used some of the data or B) can read sufficiently large new chunk.
+           // (A) is important when starting w/ a small readahead.
+           if (cachedsize < (ra-ra/16) || cachedsize+KILOBYTE(256) <= ra)
+              FadviseRead(curpos, ra);
+           }
+        else if (jumped >= 0) {    // either large forward jump, or FF (jumps by ~4xSize)
+              FadviseRead(curpos, ((size_t)jumped < Size*8)?(jumped+Size)*2:Size*2);
+        }
+        else /*if (jumped < 0)*/ { // backward jump:
+           // We don't want any readahead, otherwise e.g. fast-rewind gets in trouble.
+#if 1
+           // But we'll do some read behind for sequential short backward jumps.
+           size_t rbsize = -jumped * 4;
+           if (lastjump <= 0 && (size_t)-jumped < Size * 16 && (size_t)((curpos-Size) - cachedstart ) < rbsize) {
+              // current position has moved back enough, grow tail window.
+              off_t start = max((off_t)0, (off_t)((curpos - Size) - rbsize));
+              FadviseRead(start, rbsize);
               }
+#endif
+           // We could reduce readahead window here. But this would lead to already
+           // prefetched data being thrown out by the code below; not exactly ideal
+           // if this jump was only caused by a play mode transition etc.
+           //readahead = Size * 8;
            }
-        else
-           ahead = curpos; // jumped -> we really don't want any readahead, otherwise e.g. fast-rewind gets in trouble.
+        cachedend = max(cachedend, curpos);
         }
 
      if (cachedstart < cachedend) {
-        if (curpos - cachedstart > READCHUNK * 2) {
+        off_t maxtail = cutting ? KILOBYTE(64) : READCHUNK;
+        off_t maxhead = max(readahead, (size_t)READCHUNK);
+        if (jumped >= 0 && curpos - cachedstart >= maxtail * 2) {
            // current position has moved forward enough, shrink tail window.
-           FadviseDrop(cachedstart, curpos - READCHUNK - cachedstart);
-           cachedstart = curpos - READCHUNK;
+           FadviseDrop(cachedstart, (curpos - maxtail) - cachedstart);
+           cachedstart = curpos - maxtail;
            }
-        else if (cachedend > ahead && cachedend - curpos > READCHUNK * 2) {
+        else if (jumped < 0 && lastjump < 0 && cachedend - curpos >= maxhead * 2) {
            // current position has moved back enough, shrink head window.
-           FadviseDrop(curpos + READCHUNK, cachedend - (curpos + READCHUNK));
-           cachedend = curpos + READCHUNK;
+           // (a large readahead value may prevent this)
+           FadviseDrop(curpos + maxhead, cachedend - (curpos + maxhead));
+           cachedend = curpos + maxhead;
            }
         }
      lastpos = curpos;
+     lastjump = jumped;
 #endif
      return bytesRead;
      }
   return -1;
 }
 
-ssize_t cUnbufferedFile::Write(const void *Data, size_t Size)
+ssize_t cUnbufferedFile::WriteBuf(const void *Data, size_t Size)
 {
   if (fd >=0) {
      ssize_t bytesWritten = safe_write(fd, Data, Size);
+     //dsyslog("WRIT: fd:%3d %9zd .. %9zd SIZE: %6zd", fd, curpos, curpos+Size, Size);
 #ifdef USE_FADVISE
      if (bytesWritten > 0) {
-        begin = min(begin, curpos);
+        cachedstart = min(cachedstart, curpos);
         curpos += bytesWritten;
         written += bytesWritten;
-        lastpos = max(lastpos, curpos);
-        if (written > WRITE_BUFFER) {
-           if (lastpos > begin) {
+        cachedend = max(cachedend, curpos);
+        if (written > writebuffer) {
+           if (cachedend > cachedstart) {
               // Now do three things:
-              // 1) Start writeback of begin..lastpos range
+              // 1) Start writeback of cachedstart..cachedend range
               // 2) Drop the already written range (by the previous fadvise call)
               // 3) Handle nonpagealigned data.
-              //    This is why we double the WRITE_BUFFER; the first time around the
+              //    This is why we double the writebuffer; the first time around the
               //    last (partial) page might be skipped, writeback will start only after
               //    second call; the third call will still include this page and finally
               //    drop it from cache.
-              off_t headdrop = min(begin, WRITE_BUFFER * 2L);
-              posix_fadvise(fd, begin - headdrop, lastpos - begin + headdrop, POSIX_FADV_DONTNEED);
+              off_t headdrop = min(cachedstart, (off_t)writebuffer * 2);
+              posix_fadvise(fd, cachedstart - headdrop, cachedend - cachedstart + headdrop, POSIX_FADV_DONTNEED);
               }
-           begin = lastpos = curpos;
+           cachedstart = cachedend = curpos;
            totwritten += written;
            written = 0;
            // The above fadvise() works when writing slowly (recording), but could
@@ -1216,7 +1274,7 @@ ssize_t cUnbufferedFile::Write(const void *Data, size_t Size)
            // So we do another round of flushing, just like above, but at larger
            // intervals -- this should catch any pages that couldn't be released
            // earlier.
-           if (totwritten > MEGABYTE(32)) {
+           if (totwritten > MEGABYTE(32) + writebuffer ) {
               // It seems in some setups, fadvise() does not trigger any I/O and
               // a fdatasync() call would be required do all the work (reiserfs with some
               // kind of write gathering enabled), but the syncs cause (io) load..
@@ -1234,6 +1292,26 @@ ssize_t cUnbufferedFile::Write(const void *Data, size_t Size)
   return -1;
 }
 
+ssize_t cUnbufferedFile::Write(const void *Data, size_t Size)
+{
+  if (!wbuf) {
+     wbuf_chunk = cutting?MEGABYTE(8):MEGABYTE(4);
+     wbuf = MALLOC(uchar,wbuf_chunk);
+     if (!wbuf)
+        return WriteBuf(Data, Size);
+     wbuf_len = 0;
+  }
+  if (Size <= wbuf_chunk-wbuf_len) {
+     memcpy(wbuf+wbuf_len, Data, Size);
+     wbuf_len += Size;
+  } else {
+     WriteBuf(wbuf, wbuf_len);
+     memcpy(wbuf, Data, Size);
+     wbuf_len = Size;
+  }
+  return Size;
+}
+
 cUnbufferedFile *cUnbufferedFile::Create(const char *FileName, int Flags, mode_t Mode)
 {
   cUnbufferedFile *File = new cUnbufferedFile;
diff --git a/tools.h b/tools.h
index 1f71ec4..ce7283c 100644
--- a/tools.h
+++ b/tools.h
@@ -246,19 +246,26 @@ private:
   off_t curpos;
   off_t cachedstart;
   off_t cachedend;
-  off_t begin;
   off_t lastpos;
-  off_t ahead;
+  off_t lastjump;
   size_t readahead;
+  uchar *wbuf;
+  int wbuf_len;
+  int wbuf_chunk;
   size_t written;
   size_t totwritten;
+  int cutting;
+  size_t writebuffer;
   int FadviseDrop(off_t Offset, off_t Len);
+  int FadviseRead(off_t Offset, off_t Len);
+  ssize_t WriteBuf(const void *Data, size_t Size);
 public:
   cUnbufferedFile(void);
   ~cUnbufferedFile();
   int Open(const char *FileName, int Flags, mode_t Mode = DEFFILEMODE);
   int Close(void);
-  void SetReadAhead(size_t ra);
+  void CuttingSrc(void);
+  void CuttingDst(void);
   off_t Seek(off_t Offset, int Whence);
   ssize_t Read(void *Data, size_t Size);
   ssize_t Write(const void *Data, size_t Size);
  

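The other half of the patch replaces direct writes with a user-space coalescing buffer, so the kernel only sees large chunk-sized writes. Stripped of the fadvise bookkeeping, the Write()/WriteBuf() split boils down to roughly this (a standalone sketch, not the patch itself; the chunk size and the flush counter are illustration only):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Toy model of the Write()/WriteBuf() split: small writes accumulate in
// wbuf and are only passed on (WriteBuf -> safe_write in the patch) once
// another write no longer fits in the current chunk.
struct BufferedWriter {
  std::vector<unsigned char> wbuf;
  size_t wbuf_len = 0;
  size_t flushes = 0;          // number of "real" writes issued

  explicit BufferedWriter(size_t chunk) : wbuf(chunk) {}

  void WriteBuf(const unsigned char *, size_t) { flushes++; }

  size_t Write(const unsigned char *Data, size_t Size) {
    if (Size <= wbuf.size() - wbuf_len) {
       memcpy(&wbuf[wbuf_len], Data, Size);
       wbuf_len += Size;
       }
    else {                               // flush the filled chunk first,
       WriteBuf(wbuf.data(), wbuf_len);  // then start a fresh one
       memcpy(wbuf.data(), Data, Size);  // assumes Size <= chunk, as in the patch
       wbuf_len = Size;
       }
    return Size;
    }
};
```

With a recording's typical write sizes well below the multi-megabyte chunk, almost all Write() calls become a memcpy, and the actual write()s (and the fadvise writeback that follows them) happen at chunk granularity.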
Comments

Alexw April 3, 2008, 7:10 a.m. UTC | #1
On Tuesday 01 April 2008 01:05:37 Artur Skawina wrote:
> Alexw wrote:
> >>> Sometimes the player stops in the middle of a recording due to a zero
> >>> size request. Here is the log:
> >>>
> >>> vdr: [3836] dvbplayer thread started (pid=3643, tid=3836)
> >>> vdr: [3836] resuming replay at index 1950 (0:01:18.01)
> >>> vdr: [3837] non blocking file reader thread started (pid=3643, tid=3837)
> >>> vdr: [3836] SetBrokenLink: no GOP header found in video packet
> >>> vdr: [3836] setting audio track to 1 (0)
> >>> vdr: [3836] playing '/var/vdr/video/SERVER/recording/2008-03-28.18.58.50.50.rec/001.vdr'
> >>>
> >>> <<<unexpected stop of replay>>>
> >>>
> >>> vdr: [3837] non blocking file reader thread ended (pid=3643, tid=3837)
> >>> vdr: [3836] dvbplayer thread ended (pid=3643, tid=3836)
> >>>
> >>> vdr: [5618] WANT: fd: 25 1068536495 .. 1068722913 SIZE: 186418
> >>> vdr: [5618] READ: fd: 25 1068536495 .. 1068666704 SIZE: 130209 jump:         0 ra: 12582912
> >>> vdr: [5618] WANT: fd: 25 1068666704 .. 1068983331 SIZE: 316627
> >>> vdr: [5618] READ: fd: 25 1068666704 .. 1068680058 SIZE:  13354 jump:         0 ra: 12582912
> >>> vdr: [5618] READ: fd: 25 1068680058 .. 1068690344 SIZE:  10286 jump:         0 ra: 12582912
> >>> vdr: [5618] READ: fd: 25 1068690344 .. 1068721839 SIZE:  31495 jump:         0 ra: 12582912
> >>> vdr: [5618] READ: fd: 25 1068721839 .. 1069246127 SIZE: 524288 jump:         0 ra: 12582912
> >>> vdr: [5618] WANT: fd: 25 1069246127 .. 1070294703 SIZE: 1048576
> >>> vdr: [5618] READ: fd: 25 1069246127 .. 1069246127 SIZE:      0 jump:         0 ra: 12582912
> >>> vdr: [5618] non blocking file reader thread ended (pid=5563, tid=5618)
> >>> vdr: [5617] dvbplayer thread ended (pid=5563, tid=5617)
> >>
> >> Weird, cUnbufferedFile::Read(Size=0). I'll try to reproduce this.
> >
> > Sometimes it takes a long time to occur, sometimes not.
>
> Did this start after applying my patch, or did it happen in the past too?
> Does it always happen at a certain position? Specific stream or bitrate?
> I don't recall ever having a similar problem, the number 524288 looks a bit
> suspicious...
>

It has been happening in the past too. It is not related to the position in the stream nor to the bitrate; it seems to be related to the dvbplayer starving at that point. I suspect a blocking I/O read somewhere.

I looked at the requested read sizes and noticed that the big chunk requests are linked to the playback freezes.

vdr: [23223] **** trigger read with 23176
vdr: [23223] **** trigger read with 52353
vdr: [23223] **** trigger read with 21065
vdr: [23223] **** trigger read with 22996
vdr: [23223] **** trigger read with 51949
vdr: [23223] **** trigger read with 27058
vdr: [23223] **** trigger read with 24513
vdr: [23223] **** trigger read with 50280
vdr: [23223] **** trigger read with 18108
vdr: [23223] **** trigger read with 20406
vdr: [23223] **** trigger read with 101168 <-- picture freeze
vdr: [23223] **** trigger read with 20742
vdr: [23223] **** trigger read with 20922
vdr: [23223] **** trigger read with 49968
vdr: [23223] **** trigger read with 21879
vdr: [23223] **** trigger read with 21630
vdr: [23223] **** trigger read with 56692
vdr: [23223] **** trigger read with 24832
vdr: [23223] **** trigger read with 23184
vdr: [23223] **** trigger read with 59373
vdr: [23223] **** trigger read with 24832
vdr: [23223] **** trigger read with 25949
vdr: [23223] **** trigger read with 62326
vdr: [23223] **** trigger read with 27614
vdr: [23223] **** trigger read with 25029
vdr: [23223] **** trigger read with 56620
vdr: [23223] **** trigger read with 27066
vdr: [23223] **** trigger read with 22845
vdr: [23223] **** trigger read with 112627 <-- picture freeze
vdr: [23223] **** trigger read with 29840


> >>> As you can see the requested size is increasing until it reaches the
> >>> max buf. This is also a period with freezes in the video (late
> >>> delivery).
> >>
> >> Do these problems (0-sized reads) occur only near the end of a program
> >> being recorded?
> >
> > No, you can experience a stop in the middle of a recording.
> >
> >> Also, I see from the above that the readahead code needs to be more aggressive:
> >>> vdr: [5627] WANT: fd: 25 1188531493 .. 1188861741 SIZE: 330248
> >>
> >> [... small reads...]
> >>
> >>> vdr: [5627] READ: fd: 25 1188616808 .. 1189141096 SIZE: 524288 jump:         0 ra: 12582912
> >>
> >> the readahead window does not cover the area which is being read later
> >> -- this certainly is likely to stall playback. I'll fix this (i did not
> >> expect such a large difference in read request sizes.)
>
> The attached patch makes the readahead window grow much faster; this causes
> more I/O at the start of playback, but should handle cases like the
> one above better. If it works correctly, all the ranges mentioned in "READ:"
> lines should fall inside the preceding "WANT:" range, and playback
> shouldn't stall.

With the new patch I reach the maximum readahead window much faster (almost 10 s after playback starts).

vdr: [15502] WANT: fd: 26 1004377827 .. 1015728543 SIZE: 11350716
vdr: [15502] WANT: fd: 26 1004535337 .. 1015998967 SIZE: 11463630
vdr: [15502] WANT: fd: 26 1004682706 .. 1016274903 SIZE: 11592197
vdr: [15502] WANT: fd: 26 1004739635 .. 1016553335 SIZE: 11813700
vdr: [15502] WANT: fd: 26 1004839932 .. 1016943423 SIZE: 12103491
vdr: [15502] WANT: fd: 26 1004970331 .. 1017333735 SIZE: 12363404
vdr: [15502] WANT: fd: 26 1005150982 .. 1017733894 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1005505020 .. 1018075966 SIZE: 12570946
vdr: [15502] WANT: fd: 26 1005797742 .. 1018344142 SIZE: 12546400
vdr: [15502] WANT: fd: 26 1006185614 .. 1018631686 SIZE: 12446072
vdr: [15502] WANT: fd: 26 1006351675 .. 1018919902 SIZE: 12568227
vdr: [15502] WANT: fd: 26 1006677271 .. 1019215894 SIZE: 12538623
vdr: [15502] WANT: fd: 26 1007017481 .. 1019548174 SIZE: 12530693
vdr: [15502] WANT: fd: 26 1007229411 .. 1019812323 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1007530282 .. 1020077003 SIZE: 12546721
vdr: [15502] WANT: fd: 26 1007792199 .. 1020375111 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1008063657 .. 1020646569 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1008339261 .. 1020922173 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1008611553 .. 1021194465 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1008909033 .. 1021491945 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1009225795 .. 1021808707 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1009559062 .. 1022141974 SIZE: 12582912
vdr: [15502] WANT: fd: 26 1009941453 .. 1022441318 SIZE: 12499865
vdr: [15502] WANT: fd: 26 1010248984 .. 1022802262 SIZE: 12553278
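That growth rate can be reproduced with a toy model of the new sizing logic (ra = cachedsize + Size*8, capped at the readahead limit). This is a simplification of the patch, not the patch itself, and the ~50 KB request size is only an assumption read off the logs:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

// Simplified model of the patched sizing in cUnbufferedFile::Read():
// a sequential read (jumped == 0) requests cachedsize + 8*Size, capped.
static size_t NextReadAhead(size_t cachedsize, size_t Size, size_t cap)
{
  return std::min(cap, cachedsize + Size * 8);
}

// How many sequential reads until the readahead window hits the cap?
static int ReadsUntilCap(size_t Size, size_t cap)
{
  size_t cached = 0;   // bytes of readahead still ahead of curpos
  int reads = 0;
  while (cached < cap && reads < 10000) {
      cached = (cached > Size) ? cached - Size : 0;  // consume one read
      cached = NextReadAhead(cached, Size, cap);     // issue new WILLNEED
      reads++;
      }
  return reads;
}
```

With 50 KB reads and the 12582912-byte cap shown in the "ra:" column, the window grows by 7*Size per read and saturates after a few dozen reads, i.e. within seconds of starting playback, consistent with the log above.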


> Here the readahead window grows to ~5 Mbytes just after playback starts;
> I still need to check that this isn't too fast, doesn't saturate the disk
> and/or link, and doesn't cause delays when jumping etc. Tested by playing a few
> files from an NFS mount; I didn't notice any problems so far.
>

This is also working fine here: no delay when jumping, and the start of playback is OK.

> An incremental patch would look like this (the attached one (vs 1.4.7)
> already includes it):
>
> diff --git a/tools.c b/tools.c
> index a14f799..e22614f 100644
> --- a/tools.c
> +++ b/tools.c
> @@ -1186,13 +1186,13 @@ ssize_t cUnbufferedFile::Read(void *Data, size_t Size)
>             // Trigger the readahead IO, but only if we've used at least some of the previously
>             // requested area. This avoids calling fadvise() after every read() call.
>             size_t cachedsize = cachedend - curpos;
> -           size_t ra = cachedsize + Size*2 + (size_t)jumped*1;
> +           size_t ra = cachedsize + Size*8 + (size_t)jumped*1;
>             if (cutting)
>                ra += KILOBYTE(64);
>             ra = min(readahead, ra);
>             // Start I/O if we A) used some of the data or B) can read sufficiently large new chunk.
>             // (A) is important when starting w/ a small readahead.
> -           if (cachedsize < (ra-ra/4) || cachedsize+KILOBYTE(256) <= ra)
> +           if (cachedsize < (ra-ra/16) || cachedsize+KILOBYTE(256) <= ra)
>                FadviseRead(curpos, ra);
>             }
>          else if (jumped >= 0) {    // either large forward jump, or FF (jumps by ~4xSize)
> artur


Client's log extract (startup):

vdr: [25486] READ: fd: 27 803118549 .. 803146397 SIZE:  27848 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803146397 .. 803191530 SIZE:  45133 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803191530 .. 808584629 SIZE: 5393099
vdr: [25486] READ: fd: 27 803191530 .. 803212652 SIZE:  21122 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803212652 .. 803234315 SIZE:  21663 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803234315 .. 803289817 SIZE:  55502 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803289817 .. 809028645 SIZE: 5738828
vdr: [25486] READ: fd: 27 803289817 .. 803312817 SIZE:  23000 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803312817 .. 803334701 SIZE:  21884 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803334701 .. 803409850 SIZE:  75149 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803409850 .. 809629837 SIZE: 6219987
vdr: [25486] READ: fd: 27 803409850 .. 803430400 SIZE:  20550 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803430400 .. 803450775 SIZE:  20375 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803450775 .. 803495691 SIZE:  44916 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803495691 .. 809989165 SIZE: 6493474
vdr: [25486] READ: fd: 27 803495691 .. 803516928 SIZE:  21237 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803516928 .. 803560475 SIZE:  43547 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803560475 .. 810337541 SIZE: 6777066
vdr: [25486] READ: fd: 27 803560475 .. 803579793 SIZE:  19318 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803579793 .. 803626190 SIZE:  46397 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803626190 .. 810708717 SIZE: 7082527
vdr: [25486] READ: fd: 27 803626190 .. 803647444 SIZE:  21254 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803647444 .. 803691436 SIZE:  43992 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803691436 .. 811060653 SIZE: 7369217
vdr: [25486] READ: fd: 27 803691436 .. 803711102 SIZE:  19666 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803711102 .. 803753355 SIZE:  42253 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803753355 .. 811398677 SIZE: 7645322
vdr: [25486] READ: fd: 27 803753355 .. 803772657 SIZE:  19302 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803772657 .. 803829583 SIZE:  56926 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803829583 .. 811854085 SIZE: 8024502
vdr: [25486] READ: fd: 27 803829583 .. 803852980 SIZE:  23397 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803852980 .. 803877264 SIZE:  24284 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803877264 .. 803931256 SIZE:  53992 jump:         0 ra: 12582912
vdr: [25486] WANT: fd: 27 803931256 .. 812286021 SIZE: 8354765
vdr: [25486] READ: fd: 27 803931256 .. 803953880 SIZE:  22624 jump:         0 ra: 12582912
vdr: [25486] READ: fd: 27 803953880 .. 803975350 SIZE:  21470 jump:         0 ra: 12582912


Is it possible to log the status of the mpeg2 decoder feeder? 

Cheers,

Alex
  

Patch

diff --git a/tools.c b/tools.c
index a14f799..e22614f 100644
--- a/tools.c
+++ b/tools.c
@@ -1186,13 +1186,13 @@  ssize_t cUnbufferedFile::Read(void *Data, size_t Size)
            // Trigger the readahead IO, but only if we've used at least some of the previously
            // requested area. This avoids calling fadvise() after every read() call.
            size_t cachedsize = cachedend - curpos;
-           size_t ra = cachedsize + Size*2 + (size_t)jumped*1;
+           size_t ra = cachedsize + Size*8 + (size_t)jumped*1;
            if (cutting)
               ra += KILOBYTE(64);
            ra = min(readahead, ra);
            // Start I/O if we A) used some of the data or B) can read sufficiently large new chunk.
            // (A) is important when starting w/ a small readahead.
-           if (cachedsize < (ra-ra/4) || cachedsize+KILOBYTE(256) <= ra)
+           if (cachedsize < (ra-ra/16) || cachedsize+KILOBYTE(256) <= ra)
               FadviseRead(curpos, ra);
            }
         else if (jumped >= 0) {    // either large forward jump, or FF (jumps by ~4xSize)