From patchwork Mon Mar 31 23:05:37 2008
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Artur Skawina
X-Patchwork-Id: 12640
Received: from mx10.go2.pl ([193.17.41.74] helo=poczta.o2.pl)
	by www.linuxtv.org with esmtp (Exim 4.63) (envelope-from )
	id 1JgT4R-00082P-U1 for vdr@linuxtv.org; Tue, 01 Apr 2008 01:05:49 +0200
Received: from poczta.o2.pl (mx10.go2.pl [127.0.0.1])
	by poczta.o2.pl (Postfix) with ESMTP id 1C5795800B
	for ; Tue, 1 Apr 2008 01:05:39 +0200 (CEST)
Received: from [172.19.43.221] (ip-89-174-40-83.multimo.gtsenergis.pl [89.174.40.83])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by poczta.o2.pl (Postfix) with ESMTP
	for ; Tue, 1 Apr 2008 01:05:38 +0200 (CEST)
Message-ID: <47F16E41.7010301@o2.pl>
Date: Tue, 01 Apr 2008 01:05:37 +0200
From: Artur Skawina
User-Agent: Thunderbird 2.0.0.14pre (X11/20080323)
To: VDR Mailing List
References: <47EA8784.2040900@o2.pl> <47EBD7DB.4090001@o2.pl>
	<200803281452.38663.alexw@undercover.mine.nu>
	<200803310942.16550.alexw@undercover.mine.nu>
	<47F1091B.8010208@o2.pl> <47F11760.6020906@undercover.mine.nu>
In-Reply-To: <47F11760.6020906@undercover.mine.nu>
X-LSpam-Score: -2.6 (--)
X-LSpam-Report: No, score=-2.6 required=5.0 tests=BAYES_00=-2.599 autolearn=ham
Subject: Re: [vdr] Problems playing ongoing recordings?
X-BeenThere: vdr@linuxtv.org
X-Mailman-Version: 2.1.9
Precedence: list
Reply-To: VDR Mailing List
List-Id: VDR Mailing List
X-List-Received-Date: Mon, 31 Mar 2008 23:05:53 -0000

Alexw wrote:
>>> Sometimes the player stops in the middle of a recording due to a zero
>>> size request.
>>> Here is the log:
>>>
>>> vdr: [3836] dvbplayer thread started (pid=3643, tid=3836)
>>> vdr: [3836] resuming replay at index 1950 (0:01:18.01)
>>> vdr: [3837] non blocking file reader thread started (pid=3643, tid=3837)
>>> vdr: [3836] SetBrokenLink: no GOP header found in video packet
>>> vdr: [3836] setting audio track to 1 (0)
>>> vdr: [3836] playing '/var/vdr/video/SERVER/recording/2008-03-28.18.58.50.50.rec/001.vdr'
>>>
>>> <<>>
>>>
>>> vdr: [3837] non blocking file reader thread ended (pid=3643, tid=3837)
>>> vdr: [3836] dvbplayer thread ended (pid=3643, tid=3836)
>>> vdr: [5618] WANT: fd: 25 1068536495 .. 1068722913 SIZE: 186418
>>> vdr: [5618] READ: fd: 25 1068536495 .. 1068666704 SIZE: 130209 jump: 0 ra: 12582912
>>> vdr: [5618] WANT: fd: 25 1068666704 .. 1068983331 SIZE: 316627
>>> vdr: [5618] READ: fd: 25 1068666704 .. 1068680058 SIZE: 13354 jump: 0 ra: 12582912
>>> vdr: [5618] READ: fd: 25 1068680058 .. 1068690344 SIZE: 10286 jump: 0 ra: 12582912
>>> vdr: [5618] READ: fd: 25 1068690344 .. 1068721839 SIZE: 31495 jump: 0 ra: 12582912
>>> vdr: [5618] READ: fd: 25 1068721839 .. 1069246127 SIZE: 524288 jump: 0 ra: 12582912
>>> vdr: [5618] WANT: fd: 25 1069246127 .. 1070294703 SIZE: 1048576
>>> vdr: [5618] READ: fd: 25 1069246127 .. 1069246127 SIZE: 0 jump: 0 ra: 12582912
>>> vdr: [5618] non blocking file reader thread ended (pid=5563, tid=5618)
>>> vdr: [5617] dvbplayer thread ended (pid=5563, tid=5617)
>>>
>> Weird, cUnbufferedFile::Read(Size=0). I'll try to reproduce this.
>>
> Sometimes it takes a long time to occur, sometimes not.

Did this start after applying my patch, or did it happen in the past too?
Does it always happen at a certain position? Specific stream or bitrate?
I don't recall ever having a similar problem; the number 524288 looks a bit
suspicious...

>>> As you can see, the requested size is increasing until it reaches the max
>>> buf. This is also a period with freezes in the video (late delivery).
>>>
>> Do these problems (0-sized reads) occur only near the end of a program
>> being recorded?
>>
> No, you can experience a stop in the middle of a recording.
>
>> Also, I see from the above that the readahead code needs to be more aggressive:
>>
>>> vdr: [5627] WANT: fd: 25 1188531493 .. 1188861741 SIZE: 330248
>> [... small reads...]
>>> vdr: [5627] READ: fd: 25 1188616808 .. 1189141096 SIZE: 524288 jump: 0 ra: 12582912
>>>
>> the readahead window does not cover the area which is being read later --
>> this is likely to stall playback. I'll fix this (I did not expect such a
>> large difference in read request sizes.)

The attached patch makes the readahead window grow much faster. This will
cause more I/O at the start of playback, but should handle cases like the one
above better. If it works correctly, all the ranges mentioned in "READ:"
lines should be inside the preceding "WANT:" range, and playback shouldn't
stall.

Here the readahead window grows to ~5 Mbytes just after starting playback;
I still need to check that this is not too fast, i.e. that it doesn't
saturate the disk and/or link and cause delays when jumping etc. Tested by
playing a few files from an NFS mount; I didn't notice any problems so far.
An incremental patch would look like this (the attached one (vs 1.4.7)
already includes it):

artur

diff --git a/cutter.c b/cutter.c
index 5170ae4..7e2e506 100644
--- a/cutter.c
+++ b/cutter.c
@@ -66,7 +66,8 @@ void cCuttingThread::Action(void)
      toFile = toFileName->Open();
      if (!fromFile || !toFile)
         return;
-     fromFile->SetReadAhead(MEGABYTE(20));
+     fromFile->CuttingSrc();
+     toFile->CuttingDst();
      int Index = Mark->position;
      Mark = fromMarks.Next(Mark);
      int FileSize = 0;
@@ -91,7 +92,7 @@ void cCuttingThread::Action(void)
         if (fromIndex->Get(Index++, &FileNumber, &FileOffset, &PictureType, &Length)) {
            if (FileNumber != CurrentFileNumber) {
               fromFile = fromFileName->SetOffset(FileNumber, FileOffset);
-              fromFile->SetReadAhead(MEGABYTE(20));
+              fromFile->CuttingSrc();
               CurrentFileNumber = FileNumber;
               }
            if (fromFile) {
@@ -124,6 +125,7 @@ void cCuttingThread::Action(void)
                        error = "toFile 1";
                        break;
                        }
+                    toFile->CuttingDst();
                     FileSize = 0;
                     }
                  LastIFrame = 0;
@@ -164,6 +166,7 @@ void cCuttingThread::Action(void)
                  error = "toFile 2";
                  break;
                  }
+              toFile->CuttingDst();
               FileSize = 0;
               }
            }
diff --git a/tools.c b/tools.c
index 255c806..e22614f 100644
--- a/tools.c
+++ b/tools.c
@@ -1055,7 +1055,8 @@ bool cSafeFile::Close(void)
 
 #define USE_FADVISE
 
-#define WRITE_BUFFER KILOBYTE(800)
+//#define dfsyslog dsyslog // uncomment to turn on fadvise related logging
+#define dfsyslog(a...) do {} while (0)
 
 cUnbufferedFile::cUnbufferedFile(void)
 {
@@ -1073,12 +1074,19 @@ int cUnbufferedFile::Open(const char *FileName, int Flags, mode_t Mode)
   fd = open(FileName, Flags, Mode);
   curpos = 0;
 #ifdef USE_FADVISE
-  begin = lastpos = ahead = 0;
-  cachedstart = 0;
-  cachedend = 0;
-  readahead = KILOBYTE(128);
+  lastpos = 0;
+  lastjump = 0;
+  cachedstart = cachedend = 0;
+  // 4M readahead seems to work for playback while cutting,
+  // but isn't quite enough for FF while cutting...
+  readahead = MEGABYTE(12);
+  writebuffer = KILOBYTE(2000);
   written = 0;
   totwritten = 0;
+  cutting = 0;
+
+  wbuf = NULL;
+
   if (fd >= 0)
      posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM); // we could use POSIX_FADV_SEQUENTIAL, but we do our own readahead, disabling the kernel one.
 #endif
@@ -1087,6 +1095,11 @@ int cUnbufferedFile::Open(const char *FileName, int Flags, mode_t Mode)
 
 int cUnbufferedFile::Close(void)
 {
+  if (fd >= 0 && wbuf) {
+     if (wbuf_len)
+        WriteBuf(wbuf, wbuf_len);
+     free(wbuf);
+     }
 #ifdef USE_FADVISE
   if (fd >= 0) {
      if (totwritten)    // if we wrote anything make sure the data has hit the disk before
@@ -1107,15 +1120,35 @@ int cUnbufferedFile::Close(void)
 
 #define FADVGRAN   KILOBYTE(4) // AKA fadvise-chunk-size; PAGE_SIZE or getpagesize(2) would also work.
 #define READCHUNK  MEGABYTE(8)
 
-void cUnbufferedFile::SetReadAhead(size_t ra)
+void cUnbufferedFile::CuttingSrc(void)
+{
+  readahead = MEGABYTE(20);
+  cutting = 1;
+}
+
+void cUnbufferedFile::CuttingDst(void)
 {
-  readahead = ra;
+  writebuffer = MEGABYTE(16); // 4M is too little.
+  cutting = 2;
 }
 
 int cUnbufferedFile::FadviseDrop(off_t Offset, off_t Len)
 {
-  // rounding up the window to make sure that not PAGE_SIZE-aligned data gets freed.
-  return posix_fadvise(fd, Offset - (FADVGRAN - 1), Len + (FADVGRAN - 1) * 2, POSIX_FADV_DONTNEED);
+  // Round up the window to make sure that not PAGE_SIZE-aligned data gets freed.
+  // Note that that also means calling this with Len==0 isn't special (unlike fadvise).
+  dfsyslog("DROP: fd:%3d %9zd .. %9zd SIZE: %6zd", fd, Offset, Offset+Len, Len);
+  off_t prewin = min(Offset, (off_t)FADVGRAN - 1); // must not wrap below 0.
+  return posix_fadvise(fd, Offset - prewin, Len + prewin + (FADVGRAN - 1), POSIX_FADV_DONTNEED);
+}
+
+// Trigger background readahead on the specified range and add
+// it to the "cached" area so that we can drop the data later.
+int cUnbufferedFile::FadviseRead(off_t Offset, off_t Len)
+{
+  dfsyslog("WANT: fd:%3d %9zd .. %9zd SIZE: %6zd", fd, Offset, Offset+Len, Len);
+  cachedstart = min(cachedstart, Offset);
+  cachedend = max(cachedend, Offset+Len);
+  return posix_fadvise(fd, Offset, Len, POSIX_FADV_WILLNEED);
 }
 
 off_t cUnbufferedFile::Seek(off_t Offset, int Whence)
@@ -1131,82 +1164,107 @@ ssize_t cUnbufferedFile::Read(void *Data, size_t Size)
   if (fd >= 0) {
 #ifdef USE_FADVISE
      off_t jumped = curpos-lastpos; // nonzero means we're not at the last offset
-     if ((cachedstart < cachedend) && (curpos < cachedstart || curpos > cachedend)) {
+
+     dfsyslog("READ: fd:%3d %9zd .. %9zd SIZE: %6zd jump: %9zd ra: %7zd", fd, curpos, curpos+Size, Size, jumped, readahead);
+
+     if (curpos < cachedstart || curpos > cachedend) {
         // current position is outside the cached window -- invalidate it.
-        FadviseDrop(cachedstart, cachedend-cachedstart);
+        if (cachedstart != cachedend)
+           FadviseDrop(cachedstart, cachedend-cachedstart);
         cachedstart = curpos;
         cachedend = curpos;
         }
-     cachedstart = min(cachedstart, curpos);
 #endif
      ssize_t bytesRead = safe_read(fd, Data, Size);
 #ifdef USE_FADVISE
      if (bytesRead > 0) {
+        cachedstart = min(cachedstart, curpos);
        curpos += bytesRead;
-       cachedend = max(cachedend, curpos);
-       // Read ahead:
-       // no jump? (allow small forward jump still inside readahead window).
-       if (jumped >= 0 && jumped <= (off_t)readahead) {
-          // Trigger the readahead IO, but only if we've used at least
-          // 1/2 of the previously requested area. This avoids calling
-          // fadvise() after every read() call.
-          if (ahead - curpos < (off_t)(readahead / 2)) {
-             posix_fadvise(fd, curpos, readahead, POSIX_FADV_WILLNEED);
-             ahead = curpos + readahead;
-             cachedend = max(cachedend, ahead);
-             }
-          if (readahead < Size * 32) { // automagically tune readahead size.
-             readahead = Size * 32;
+       // no jump or small forward jump still inside readahead window.
+       if (jumped >= 0 && curpos <= cachedend) {
+          // Trigger the readahead IO, but only if we've used at least some of the previously
+          // requested area. This avoids calling fadvise() after every read() call.
+          size_t cachedsize = cachedend - curpos;
+          size_t ra = cachedsize + Size*8 + (size_t)jumped*1;
+          if (cutting)
+             ra += KILOBYTE(64);
+          ra = min(readahead, ra);
+          // Start I/O if we A) used some of the data or B) can read sufficiently large new chunk.
+          // (A) is important when starting w/ a small readahead.
+          if (cachedsize < (ra-ra/16) || cachedsize+KILOBYTE(256) <= ra)
+             FadviseRead(curpos, ra);
+          }
+       else if (jumped >= 0) { // either large forward jump, or FF (jumps by ~4xSize)
+          FadviseRead(curpos, ((size_t)jumped < Size*8) ? (jumped+Size)*2 : Size*2);
+          }
+       else /*if (jumped < 0)*/ { // backward jump:
+          // We don't want any readahead, otherwise e.g. fast-rewind gets in trouble.
+#if 1
+          // But we'll do some read behind for sequential short backward jumps.
+          size_t rbsize = -jumped * 4;
+          if (lastjump <= 0 && (size_t)-jumped < Size * 16 && (size_t)((curpos-Size) - cachedstart) < rbsize) {
+             // current position has moved back enough, grow tail window.
+             off_t start = max((off_t)0, (off_t)((curpos - Size) - rbsize));
+             FadviseRead(start, rbsize);
             }
+#endif
+          // We could reduce readahead window here. But this would lead to already
+          // prefetched data being thrown out by the code below; not exactly ideal
+          // if this jump was only caused by a play mode transition etc.
+          //readahead = Size * 8;
          }
-       else
-          ahead = curpos; // jumped -> we really don't want any readahead, otherwise e.g. fast-rewind gets in trouble.
+       cachedend = max(cachedend, curpos);
        }
 
     if (cachedstart < cachedend) {
-       if (curpos - cachedstart > READCHUNK * 2) {
+       off_t maxtail = cutting ? KILOBYTE(64) : READCHUNK;
+       off_t maxhead = max(readahead, (size_t)READCHUNK);
+       if (jumped >= 0 && curpos - cachedstart >= maxtail * 2) {
           // current position has moved forward enough, shrink tail window.
-          FadviseDrop(cachedstart, curpos - READCHUNK - cachedstart);
-          cachedstart = curpos - READCHUNK;
+          FadviseDrop(cachedstart, (curpos - maxtail) - cachedstart);
+          cachedstart = curpos - maxtail;
           }
-       else if (cachedend > ahead && cachedend - curpos > READCHUNK * 2) {
+       else if (jumped < 0 && lastjump < 0 && cachedend - curpos >= maxhead * 2) {
           // current position has moved back enough, shrink head window.
-          FadviseDrop(curpos + READCHUNK, cachedend - (curpos + READCHUNK));
-          cachedend = curpos + READCHUNK;
+          // (a large readahead value may prevent this)
+          FadviseDrop(curpos + maxhead, cachedend - (curpos + maxhead));
+          cachedend = curpos + maxhead;
           }
        }
     lastpos = curpos;
+    lastjump = jumped;
 #endif
     return bytesRead;
     }
  return -1;
 }
 
-ssize_t cUnbufferedFile::Write(const void *Data, size_t Size)
+ssize_t cUnbufferedFile::WriteBuf(const void *Data, size_t Size)
 {
   if (fd >=0) {
      ssize_t bytesWritten = safe_write(fd, Data, Size);
+     //dsyslog("WRIT: fd:%3d %9zd .. %9zd SIZE: %6zd", fd, curpos, curpos+Size, Size);
 #ifdef USE_FADVISE
      if (bytesWritten > 0) {
-        begin = min(begin, curpos);
+        cachedstart = min(cachedstart, curpos);
         curpos += bytesWritten;
         written += bytesWritten;
-        lastpos = max(lastpos, curpos);
-        if (written > WRITE_BUFFER) {
-           if (lastpos > begin) {
+        cachedend = max(cachedend, curpos);
+        if (written > writebuffer) {
+           if (cachedend > cachedstart) {
              // Now do three things:
-             // 1) Start writeback of begin..lastpos range
+             // 1) Start writeback of cachedstart..cachedend range
              // 2) Drop the already written range (by the previous fadvise call)
              // 3) Handle nonpagealigned data.
-             // This is why we double the WRITE_BUFFER; the first time around the
+             // This is why we double the writebuffer; the first time around the
              // last (partial) page might be skipped, writeback will start only after
             // second call; the third call will still include this page and finally
             // drop it from cache.
-             off_t headdrop = min(begin, WRITE_BUFFER * 2L);
-             posix_fadvise(fd, begin - headdrop, lastpos - begin + headdrop, POSIX_FADV_DONTNEED);
+             off_t headdrop = min(cachedstart, (off_t)writebuffer * 2);
+             posix_fadvise(fd, cachedstart - headdrop, cachedend - cachedstart + headdrop, POSIX_FADV_DONTNEED);
              }
-          begin = lastpos = curpos;
+          cachedstart = cachedend = curpos;
           totwritten += written;
           written = 0;
           // The above fadvise() works when writing slowly (recording), but could
@@ -1216,7 +1274,7 @@ ssize_t cUnbufferedFile::Write(const void *Data, size_t Size)
           // So we do another round of flushing, just like above, but at larger
           // intervals -- this should catch any pages that couldn't be released
           // earlier.
-          if (totwritten > MEGABYTE(32)) {
+          if (totwritten > MEGABYTE(32) + writebuffer) {
             // It seems in some setups, fadvise() does not trigger any I/O and
             // a fdatasync() call would be required do all the work (reiserfs with some
             // kind of write gathering enabled), but the syncs cause (io) load..
@@ -1234,6 +1292,26 @@ ssize_t cUnbufferedFile::Write(const void *Data, size_t Size)
   return -1;
 }
 
+ssize_t cUnbufferedFile::Write(const void *Data, size_t Size)
+{
+  if (!wbuf) {
+     wbuf_chunk = cutting ? MEGABYTE(8) : MEGABYTE(4);
+     wbuf = MALLOC(uchar, wbuf_chunk);
+     if (!wbuf)
+        return WriteBuf(Data, Size);
+     wbuf_len = 0;
+     }
+  if (Size <= wbuf_chunk - wbuf_len) {
+     memcpy(wbuf + wbuf_len, Data, Size);
+     wbuf_len += Size;
+     }
+  else {
+     WriteBuf(wbuf, wbuf_len);
+     memcpy(wbuf, Data, Size);
+     wbuf_len = Size;
+     }
+  return Size;
+}
+
 cUnbufferedFile *cUnbufferedFile::Create(const char *FileName, int Flags, mode_t Mode)
 {
   cUnbufferedFile *File = new cUnbufferedFile;
diff --git a/tools.h b/tools.h
index 1f71ec4..ce7283c 100644
--- a/tools.h
+++ b/tools.h
@@ -246,19 +246,26 @@ private:
   off_t curpos;
   off_t cachedstart;
   off_t cachedend;
-  off_t begin;
   off_t lastpos;
-  off_t ahead;
+  off_t lastjump;
   size_t readahead;
+  uchar *wbuf;
+  int wbuf_len;
+  int wbuf_chunk;
   size_t written;
   size_t totwritten;
+  int cutting;
+  size_t writebuffer;
   int FadviseDrop(off_t Offset, off_t Len);
+  int FadviseRead(off_t Offset, off_t Len);
+  ssize_t WriteBuf(const void *Data, size_t Size);
 public:
   cUnbufferedFile(void);
   ~cUnbufferedFile();
   int Open(const char *FileName, int Flags, mode_t Mode = DEFFILEMODE);
   int Close(void);
-  void SetReadAhead(size_t ra);
+  void CuttingSrc(void);
+  void CuttingDst(void);
   off_t Seek(off_t Offset, int Whence);
   ssize_t Read(void *Data, size_t Size);
   ssize_t Write(const void *Data, size_t Size);
diff --git a/tools.c b/tools.c
index a14f799..e22614f 100644
--- a/tools.c
+++ b/tools.c
@@ -1186,13 +1186,13 @@ ssize_t cUnbufferedFile::Read(void *Data, size_t Size)
           // Trigger the readahead IO, but only if we've used at least some of the previously
           // requested area. This avoids calling fadvise() after every read() call.
           size_t cachedsize = cachedend - curpos;
-          size_t ra = cachedsize + Size*2 + (size_t)jumped*1;
+          size_t ra = cachedsize + Size*8 + (size_t)jumped*1;
           if (cutting)
              ra += KILOBYTE(64);
           ra = min(readahead, ra);
           // Start I/O if we A) used some of the data or B) can read sufficiently large new chunk.
           // (A) is important when starting w/ a small readahead.
-          if (cachedsize < (ra-ra/4) || cachedsize+KILOBYTE(256) <= ra)
+          if (cachedsize < (ra-ra/16) || cachedsize+KILOBYTE(256) <= ra)
              FadviseRead(curpos, ra);
           }
        else if (jumped >= 0) { // either large forward jump, or FF (jumps by ~4xSize)