RGB/PAL over VGA at variable frame rate

Message ID 20080722163705.GA17041@roja.toh.cx
State New

Commit Message

thomas July 22, 2008, 4:37 p.m. UTC
  Hi list,

over the last few days I have gained some interesting experience with VGA cards
that I now want to share with you.

goal
----

develop a budget card based VDR with PAL/RGB output and FF like output quality

problem
-------

as we all know, current VGA graphics output quality suffers from certain
limitations. Graphics cards known so far operate at a fixed frame rate
that is not properly synchronized with the stream.
Thus fields or even whole frames often do not appear at the right time at the
output. Some are doubled, others are lost, ultimately leading to more or less
jerky playback.

To a certain degree you can work around this with software deinterlacing,
at the cost of worse picture quality when playing interlaced material. CPU
load is also considerably increased by it.

It appeared to be a privilege of so-called full featured cards (expensive cards
running proprietary firmware) to output true RGB PAL at a variable frame rate,
thus always providing full stream synchronicity.

I've always been bothered by that and finally started to develop a few patches
with the goal of overcoming these VGA graphics limitations.

solution
--------

graphics cards basically are not designed for variable frame rates. Once
you have set up their timing you are not given any means, such as registers,
to synchronize the frame rate with external timers. But that's exactly what's
needed for the output signal to stay in sync with the frame rate provided by
xine-lib or other software decoders.

To extend/reduce the overall time between vertical retraces I first
dynamically added/removed a few scanlines to/from the modeline, but with bad
results: the picture was visibly jumping on the TV set.

After some further experimenting I finally found a way to fine-tune the
frame rate of my elderly Radeon type card, this time without any bad side
effects on the screen.

Just trimming the length of a few scanlines during the vertical retrace
period does the trick.
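
To get a feel for the magnitudes involved, here is a small back-of-the-envelope
calculation (my illustration, not part of the patches), based on the 720x576i
modeline used in my test setup below; the "4 lines by 8 pixels" trim figure is
only a made-up example value:

/*
 * Back-of-the-envelope sketch: how much trimming a few hidden scanlines
 * can shift the field rate, assuming the 13.875 MHz / 888 x 625 modeline
 * from the test setup below. The "4 lines by 8 pixels" figure is only an
 * illustrative assumption.
 */
#include <stdio.h>

int main(void)
{
	double dotclock = 13.875e6;	/* pixel clock [Hz]              */
	double htotal   = 888.0;	/* pixels per scanline           */
	double vtotal   = 625.0;	/* lines per interlaced frame    */

	double line_us  = htotal / dotclock * 1e6;	/* ~64.0 us      */
	double field_us = line_us * vtotal / 2.0;	/* ~20000 us     */

	/* example: 4 invisible lines shortened by 8 pixels each */
	double trim_us  = 4.0 * 8.0 / dotclock * 1e6;	/* ~2.3 us       */

	printf("line %.2f us, field %.1f us (%.2f fields/s)\n",
	       line_us, field_us, 1e6 / field_us);
	printf("trim shifts one field by %.2f us (about %.0f ppm)\n",
	       trim_us, trim_us / field_us * 1e6);
	return 0;
}

A shift of a few microseconds per field is far too small to disturb the
picture, but integrated over many fields it is enough to track the drift
between the stream clock and the graphics card.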

Then I tried to implement the new functionality with only minimal
changes to my current VDR development system. The Radeon DRM driver is
perfectly suited for that; I just had to add a few lines of code there.

I finally ended up with a small patch against the Radeon DRM driver and an even
smaller one against xine-lib. The latter could also live directly
in the Xserver. Please see the attachments for code samples.

When xine-lib calls PutImage() it checks whether to increase/decrease the
Xserver's frame rate. This way, after a short adaptation phase, xine-lib can
place its PutImage() calls right in the middle between 2 adjacent vertical
blanking intervals. This provides maximum immunity against jitter. And
even better: no more frames/fields are lost due to drift between the stream
and the graphics card frequency.
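
The xine-lib patch itself is in the attachment, but to give a rough idea of how
the new ioctl is meant to be driven from userspace, here is a minimal
hypothetical sketch (not the actual patch). It assumes the patched
radeon_drm.h from the diff below is on the include path; the device path, the
10 ms target phase and the trim word 200 are just placeholder values:

/*
 * Hypothetical userspace sketch of the DRM_RADEON_VSYNC feedback step.
 * Assumes the patched radeon_drm.h (see diff below) plus the usual DRM
 * headers are available. Target phase and trim word are placeholders.
 */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/time.h>
#include "radeon_drm.h"

static void adjust_trim(int drm_fd)
{
	drm_radeon_vsync_t vs = { .vbl_trim = VBL_IGNORE };

	/* first call: only read the current vblank timing, change nothing */
	if (ioctl(drm_fd, DRM_IOCTL_RADEON_VSYNC, &vs) < 0)
		return;

	/*
	 * PutImage() should land roughly mid-way between two vblanks,
	 * i.e. ~10 ms after the last one at a 50 Hz field rate
	 * (vbl_since.tv_sec is 0 at these intervals). If we are early,
	 * ask the driver to slow the CRTC down a little; otherwise run
	 * the nominal modeline timing.
	 */
	if (vs.vbl_since.tv_usec < 10000)
		vs.vbl_trim = VBL_SLOWER | 200;	/* placeholder trim word  */
	else
		vs.vbl_trim = 0;		/* back to nominal timing */

	ioctl(drm_fd, DRM_IOCTL_RADEON_VSYNC, &vs);
}

int main(void)
{
	int fd = open("/dev/dri/card0", O_RDWR);	/* path may differ */

	if (fd < 0)
		return 1;
	adjust_trim(fd);	/* a decoder would call this once per PutImage() */
	return 0;
}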

Because we now dispense with deinterlacing entirely, we are also rid of
all its disadvantages:

If driving a device with native interlaced input (e.g. a traditional TV set
or a modern TFT with good RGB support) we have no deinterlacing artifacts
anymore.

Since the softdecoders are now relieved of any CPU intensive deinterlacing,
we can build cheap budget card based VDRs with slow CPUs.

Please find attached 2 small patches showing you the basic idea, and a
description of my test environment. The project is far from complete, but
even at this early stage of development it shows promising results.

It should give you a rough idea of how to recycle your old hardware into a
smoothly running budget VDR with high quality RGB video output.

some suggestions for what to do next:
- detection of initial field parity
- faster initial frame rate synchronisation after starting replay
- remove some hard coded constants (special dependencies on my system's timing)

Some more information about the project is also available here
http://www.vdr-portal.de/board/thread.php?threadid=78480

Currently it's all based on Radeons, but I'll also try to port it to other
types of VGA cards. There will be some updates in the near future. Stay tuned.

-Thomas

diff -ru drivers/char/drm.org/radeon_drm.h drivers/char/drm/radeon_drm.h
--- drivers/char/drm.org/radeon_drm.h	2008-01-24 23:58:37.000000000 +0100
+++ drivers/char/drm/radeon_drm.h	2008-07-20 17:51:08.000000000 +0200
@@ -442,6 +442,8 @@
  * KW: actually it's illegal to change any of this (backwards compatibility).
  */
 
+#define SYNC_FIELDS
+
 /* Radeon specific ioctls
  * The device specific ioctl range is 0x40 to 0x79.
  */
@@ -473,6 +475,9 @@
 #define DRM_RADEON_SETPARAM   0x19
 #define DRM_RADEON_SURF_ALLOC 0x1a
 #define DRM_RADEON_SURF_FREE  0x1b
+#ifdef SYNC_FIELDS
+#define DRM_RADEON_VSYNC      0x1c
+#endif
 
 #define DRM_IOCTL_RADEON_CP_INIT    DRM_IOW( DRM_COMMAND_BASE + DRM_RADEON_CP_INIT, drm_radeon_init_t)
 #define DRM_IOCTL_RADEON_CP_START   DRM_IO(  DRM_COMMAND_BASE + DRM_RADEON_CP_START)
@@ -501,6 +506,9 @@
 #define DRM_IOCTL_RADEON_SETPARAM   DRM_IOW( DRM_COMMAND_BASE + DRM_RADEON_SETPARAM, drm_radeon_setparam_t)
 #define DRM_IOCTL_RADEON_SURF_ALLOC DRM_IOW( DRM_COMMAND_BASE + DRM_RADEON_SURF_ALLOC, drm_radeon_surface_alloc_t)
 #define DRM_IOCTL_RADEON_SURF_FREE  DRM_IOW( DRM_COMMAND_BASE + DRM_RADEON_SURF_FREE, drm_radeon_surface_free_t)
+#ifdef SYNC_FIELDS
+#define DRM_IOCTL_RADEON_VSYNC      DRM_IOWR(DRM_COMMAND_BASE + DRM_RADEON_VSYNC, drm_radeon_vsync_t)
+#endif
 
 typedef struct drm_radeon_init {
 	enum {
@@ -722,6 +730,19 @@
 	unsigned int address;
 } drm_radeon_surface_free_t;
 
+#ifdef SYNC_FIELDS
+typedef struct drm_radeon_vsync {
+        struct timeval vbl_now;   /* time when this ioctl() has been called */
+        struct timeval vbl_since; /* time since last vertical blank */
+        unsigned vbl_received;    /* continuously counting blanking intervals */
+        unsigned vbl_trim;        /* graphics card frame rate adjust */
+} drm_radeon_vsync_t;
+
+#define VBL_IGNORE     0x80000000
+#define VBL_SLOWER     0x40000000
+
+#endif
+
 #define	DRM_RADEON_VBLANK_CRTC1 	1
 #define	DRM_RADEON_VBLANK_CRTC2 	2
 
diff -ru drivers/char/drm.org/radeon_drv.h drivers/char/drm/radeon_drv.h
--- drivers/char/drm.org/radeon_drv.h	2008-01-24 23:58:37.000000000 +0100
+++ drivers/char/drm/radeon_drv.h	2008-07-20 16:07:50.000000000 +0200
@@ -294,6 +294,13 @@
 	/* starting from here on, data is preserved accross an open */
 	uint32_t flags;		/* see radeon_chip_flags */
 	unsigned long fb_aper_offset;
+
+#ifdef SYNC_FIELDS
+        /* sync fields circuitry */
+        struct timeval vbl_last; /* remember last vblank */
+	u32 trim;
+#endif
+
 } drm_radeon_private_t;
 
 typedef struct drm_radeon_buf_priv {
@@ -419,6 +426,9 @@
 #define RADEON_CLOCK_CNTL_INDEX		0x0008
 #define RADEON_CONFIG_APER_SIZE		0x0108
 #define RADEON_CONFIG_MEMSIZE		0x00f8
+#ifdef SYNC_FIELDS
+#define RADEON_CRTC_H_TOTAL_DISP        0x0200
+#endif
 #define RADEON_CRTC_OFFSET		0x0224
 #define RADEON_CRTC_OFFSET_CNTL		0x0228
 #	define RADEON_CRTC_TILE_EN		(1 << 15)
diff -ru drivers/char/drm.org/radeon_irq.c drivers/char/drm/radeon_irq.c
--- drivers/char/drm.org/radeon_irq.c	2008-01-24 23:58:37.000000000 +0100
+++ drivers/char/drm/radeon_irq.c	2008-07-20 20:08:06.000000000 +0200
@@ -102,6 +102,25 @@
 			    (vblank_crtc & DRM_RADEON_VBLANK_CRTC2)))
 			atomic_inc(&dev->vbl_received);
 
+#ifdef SYNC_FIELDS
+		do_gettimeofday(&dev_priv->vbl_last);
+
+		/*
+		 * if requested we tamper with length of few
+		 * horizontal lines here.
+		 *
+		 * don't try this at home:-)
+		 */
+                if (dev_priv->trim) {
+		    int val = RADEON_READ(RADEON_CRTC_H_TOTAL_DISP);
+		    int ooc = dev_priv->trim & 0xff;
+
+                    udelay(dev_priv->trim >> 16 & 0xff);
+                    RADEON_WRITE(RADEON_CRTC_H_TOTAL_DISP, val + (dev_priv->trim & VBL_SLOWER ? ooc : -ooc));
+                    udelay(dev_priv->trim >> 8 & 0xff);
+                    RADEON_WRITE(RADEON_CRTC_H_TOTAL_DISP, val);
+                }
+#endif
 		DRM_WAKEUP(&dev->vbl_queue);
 		drm_vbl_send_signals(dev);
 	}
diff -ru drivers/char/drm.org/radeon_state.c drivers/char/drm/radeon_state.c
--- drivers/char/drm.org/radeon_state.c	2008-01-24 23:58:37.000000000 +0100
+++ drivers/char/drm/radeon_state.c	2008-07-20 21:20:24.000000000 +0200
@@ -2100,6 +2100,32 @@
 		return 0;
 }
 
+#ifdef SYNC_FIELDS
+static int radeon_vsync(struct drm_device *dev, void *data, struct drm_file *file_priv)
+{
+       drm_radeon_private_t *dev_priv = dev->dev_private;
+       drm_radeon_vsync_t *vsyncp = (drm_radeon_vsync_t *)data;
+
+       if (!(vsyncp->vbl_trim & VBL_IGNORE)) {
+           if (dev_priv->trim != vsyncp->vbl_trim) {
+//               printk(KERN_DEBUG "[drm] changed radeon drift trim from %x -> %x\n", dev_priv->trim, vsyncp->vbl_trim);
+               dev_priv->trim = vsyncp->vbl_trim;
+           }
+       }
+       do_gettimeofday(&vsyncp->vbl_now);
+       if (vsyncp->vbl_now.tv_usec < dev_priv->vbl_last.tv_usec) {
+           vsyncp->vbl_since.tv_sec = vsyncp->vbl_now.tv_sec - dev_priv->vbl_last.tv_sec - 1;
+           vsyncp->vbl_since.tv_usec = vsyncp->vbl_now.tv_usec - dev_priv->vbl_last.tv_usec + 1000000;
+       } else {
+           vsyncp->vbl_since.tv_sec = vsyncp->vbl_now.tv_sec - dev_priv->vbl_last.tv_sec;
+           vsyncp->vbl_since.tv_usec = vsyncp->vbl_now.tv_usec - dev_priv->vbl_last.tv_usec;
+       }
+       vsyncp->vbl_received = atomic_read(&dev->vbl_received);
+       vsyncp->vbl_trim = dev_priv->trim;
+       return 0;
+}      
+#endif
+
 static int radeon_cp_clear(struct drm_device *dev, void *data, struct drm_file *file_priv)
 {
 	drm_radeon_private_t *dev_priv = dev->dev_private;
@@ -3184,7 +3210,10 @@
 	DRM_IOCTL_DEF(DRM_RADEON_IRQ_WAIT, radeon_irq_wait, DRM_AUTH),
 	DRM_IOCTL_DEF(DRM_RADEON_SETPARAM, radeon_cp_setparam, DRM_AUTH),
 	DRM_IOCTL_DEF(DRM_RADEON_SURF_ALLOC, radeon_surface_alloc, DRM_AUTH),
-	DRM_IOCTL_DEF(DRM_RADEON_SURF_FREE, radeon_surface_free, DRM_AUTH)
+	DRM_IOCTL_DEF(DRM_RADEON_SURF_FREE, radeon_surface_free, DRM_AUTH),
+#ifdef SYNC_FIELDS
+	DRM_IOCTL_DEF(DRM_RADEON_VSYNC, radeon_vsync, DRM_AUTH),
+#endif
 };
 
 int radeon_max_ioctl = DRM_ARRAY_SIZE(radeon_ioctls);
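
Reading the irq handler above, the vbl_trim word appears to be packed as
follows (the pack_trim() helper below is only my illustration and is not part
of the patch): bits 0-7 hold the CRTC_H_TOTAL_DISP adjustment, bits 16-23 and
8-15 the two udelay() values, and VBL_SLOWER selects whether the lines are
lengthened or shortened.

/* Illustration only, derived from the irq handler above; not part of the patch. */
#include <stdint.h>

#define VBL_IGNORE 0x80000000u	/* leave the current trim untouched        */
#define VBL_SLOWER 0x40000000u	/* lengthen lines (slow the CRTC down)     */

static uint32_t pack_trim(uint8_t pre_delay_us, uint8_t hold_us,
			  uint8_t amount, int slower)
{
	uint32_t trim = ((uint32_t)pre_delay_us << 16) |  /* udelay() before the write */
			((uint32_t)hold_us << 8) |        /* udelay() while trimmed    */
			amount;                           /* CRTC_H_TOTAL_DISP delta   */
	return slower ? (trim | VBL_SLOWER) : trim;
}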

sample hardware configuration:

processor: Pentium III (Coppermine) 800MHz
memory: 512MB
SAT-budget card: TT-S1401
graphics: AGP Radeon 9200 SE (RV280)
VGA-to-SCART RGB adapter like this: http://www.sput.nl/hardware/tv-x.html

my current software configuration:

- debian lenny
- kernel 2.6.24-1-686 
- xine-lib 1.1.8
- xineliboutput Version 1.0.0rc2 - yes, it could be a newer one:-)
- xserver-xorg-video-radeon 1:6.9.0-1

important params in xineliboutput setup.conf:

xineliboutput.Decoder.PesBuffers = 500
xineliboutput.DisplayAspect = automatic
xineliboutput.Frontend = sxfe
xineliboutput.Fullscreen = 0
xineliboutput.Modeline = 
xineliboutput.Video.AutoCrop = 0
xineliboutput.Video.Deinterlace = none
xineliboutput.Video.Driver = xv
xineliboutput.Video.FieldOrder = 0
xineliboutput.Video.Overscan = 0
xineliboutput.Video.Port = 0.0
xineliboutput.Video.Scale = 1
xineliboutput.X11.WindowHeight = 575
xineliboutput.X11.WindowWidth = 720

interlaced PAL modeline for xserver:

Modeline "720x576i" 13.875 720 744 808 888 576 580 585 625 -HSync -Vsync interlace
Option      "ForceMinDotClock" "12MHz"
(see other attachment for full xorg.conf description)
Section "ServerLayout"
	Identifier     "X.org Configured"
	Screen      0  "Screen0" 0 0
	InputDevice    "Mouse0" "CorePointer"
	InputDevice    "Keyboard0" "CoreKeyboard"
EndSection

Section "Files"
	RgbPath      "/etc/X11/rgb"
	ModulePath   "/usr/lib/xorg/modules"
	FontPath     "/usr/share/fonts/X11/misc"
	FontPath     "/usr/share/fonts/X11/cyrillic"
	FontPath     "/usr/share/fonts/X11/100dpi/:unscaled"
	FontPath     "/usr/share/fonts/X11/75dpi/:unscaled"
	FontPath     "/usr/share/fonts/X11/Type1"
	FontPath     "/usr/share/fonts/X11/100dpi"
	FontPath     "/usr/share/fonts/X11/75dpi"
	FontPath     "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType"
EndSection

Section "Module"
	Load  "GLcore"
	Load  "record"
	Load  "extmod"
	Load  "dri"
	Load  "xtrap"
	Load  "glx"
	Load  "dbe"
EndSection

Section "InputDevice"
	Identifier  "Keyboard0"
	Driver      "kbd"
EndSection

Section "InputDevice"
	Identifier  "Mouse0"
	Driver      "mouse"
	Option	    "Protocol" "auto"
	Option	    "Device" "/dev/input/mice"
	Option	    "ZAxisMapping" "4 5 6 7"
EndSection

Section "Monitor"
	Identifier   "Monitor0"
	VendorName   "Monitor Vendor"
	ModelName    "Monitor Model"

	Modeline "720x576i"   13.875 720  744  808  888  576  580  585  625 -HSync -Vsync interlace
EndSection

Section "Device"
	Identifier  "Card0"
	Driver      "radeon"
	VendorName  "ATI Technologies Inc"
	BoardName   "Radeon 9100 IGP"
	Option      "ForceMinDotClock" "12MHz"
EndSection

Section "Screen"
	Identifier "Screen0"
	Device     "Card0"
	Monitor    "Monitor0"
	DefaultDepth    24
	SubSection "Display"
		Viewport   0 0
		Depth      24
		Modes      "720x576i" 
	EndSubSection
EndSection
  

Comments

Gavin Hamill July 22, 2008, 6:12 p.m. UTC | #1
On Tue, 2008-07-22 at 18:37 +0200, Thomas Hilber wrote:
> Hi list,
> 
> the last few days I made some interesting experiences with VGA cards I
> now want to share with you.

Wow!

I can't support this project strongly enough - what a perfect idea!

In the inevitable shift towards HDTV and progressive scanning, I was
becoming increasingly concerned that the countless hours of interlaced
content would be forgotten in the scramble for new and shiny.

Indeed, my own VDR FF based system exists in a large desktop PC case
only because of the enormous (and antiquated) FF card. This project
leads the way for replacing it with something much more compact, since
the card runs hot and it's only a matter of time before it dies.

Is it likely to work with any newer Radeons, or is it because the RV2xx
series is the last to have useful full open source drivers? I have a
couple of RV3xxs (Radeon X300 + X600 Pro) I'd love to try this with :)

Re: http://www.sput.nl/hardware/tv-x.html

I've read this page before, and I dearly love the 'Problems' section
which I've reproduced verbatim here:

"Apparently, some hardware doesn't support interlaced mode. If you have
sync problems, check the sync signal with an oscilloscope."  

Yup, because everyone has one lying around ;)

In fact, my biggest problem with this project before now has been the
manufacture of such an adapter - my soldering skills are beyond poor.

I don't suppose you'd be willing to make some VGA -> SCART hobby-boxes
up for a suitable fee? :)

Cheers,
Gavin.
  
Theunis Potgieter July 22, 2008, 6:30 p.m. UTC | #2
wow, when I buy an AMD card I will sure look up your code. :)

currently I'm still using a pentium 4, 2.4GHz machine with nvidia AGP 440MX
card,

The only way to get that to work properly was with the older nvidia driver
71.86.0; apparently the newer drivers force PAL (or any other TV standard)
to run at 60Hz instead of 50Hz, which is what my broadcast is. So I had to
"downgrade" the driver to get the proper output.

With these options in my xorg.conf to disable the driver's auto settings.

Section "Monitor"
    .
    .
    ModeLine "720x576PAL"   27.50   720 744 800 880 576 582 588 625 -hsync
-vsync
    ModeLine "720x576@50i"  14.0625 720 760 800 900 576 582 588 625 -hsync
-vsync interlace
    .
EndSection

Section "Screen"
    .
    .
    Option         "UseEDIDFreqs" "FALSE"
    Option         "UseEDIDDpi" "FALSE"
    Option         "ModeValidation" "NoEdidModes"
    SubSection "Display"
         Modes       "720x576PAL"
    EndSubSection
    .
EndSection

xvidtune reports this on DISPLAY=:0.1
 "720x576"      27.50    720  744  800  880    576  582  588  625 -hsync
-vsync

cpu load is 10% with xineliboutput set to use xvmc, my cpu fan even turns
off, it only kicks in when I view a xvid/divx type movie.

Theunis

  
Pasi Kärkkäinen July 22, 2008, 8:51 p.m. UTC | #3
On Tue, Jul 22, 2008 at 06:37:05PM +0200, Thomas Hilber wrote:
> Hi list,
> 
> the last few days I made some interesting experiences with VGA cards I
> now want to share with you.
> 
> goal
> ----
> 
> develop a budget card based VDR with PAL/RGB output and FF like output quality
> 
> problem
> -------
> 
> as we all know current VGA graphics output quality suffers from certain
> limitations. Graphics cards known so far operate at a fixed frame rate 
> not properly synchronized with the stream.
> Thus fields or even frames do often not appear the right time at the ouput. 
> Some are doubled others are lost. Finally leading to more or less jerky 
> playback.
> 

Hi and thanks for starting this project!

I'm a dxr3 user myself, but of course it would be nice to get good output
quality without "extra" hardware! :)

> 
> It appeared to be a privilege of so called full featured cards (expensive cards
> running proprietary firmware) to output true RGB PAL at variable framerate.
> Thus always providing full stream synchronicity.
> 

variable framerate.. I tend to watch interlaced PAL streams (50 hz), PAL
DVD's and NTSC DVD's.. It would be great to get perfect output for all of
these :) 

A bit off topic.. Does any of the video players for Linux switch to a
resolution/modeline with a different refresh rate when watching a movie to
get perfect synchronization and no tearing? 

An example.. your desktop might be at 70 hz refresh rate in normal use (ok,
maybe it's 60 hz nowadays with LCD displays), and when you start watching
PAL TV it would be better to have your display at 50 hz or 100 hz to get
perfect output.. and then, when you start seeing a 24 fps movie, it would be
best to have your display in 72 hz mode (3*24).. etc.

-- Pasi
  
thomas July 22, 2008, 9:17 p.m. UTC | #4
On Tue, Jul 22, 2008 at 07:12:35PM +0100, Gavin Hamill wrote:
> In the inevitable shift towards HDTV and progressive scanning, I was
> becoming increasingly concerned that the countless hours of interlaced
> content would be forgotten in the scramble for new and shiny.

not to forget that interlaced formats are still in use for HDTV. I think
you could recycle the basic idea behind my patch for HDTV as well.

> Indeed, my own VDR FF based system exists in a large desktop PC case
> only because of the enormous (and antiquated) FF card. This project
> leads the way for replacing it with something much more compact, since
> the card runs hot and it's only a matter of time before it dies.

that originally was one of my major motivations. I don't like a huge
VDR box with an FF card in my living room. At least Radeons are also available
in a low profile format, as are some budget satellite cards.

I hope one day we could also support some on-board graphics (like nVidia or
Intel), which would allow tiny VDR boxes with very common hardware.

> Is it likely to work with any newer Radeons, or is it because the RV2xx
> series is the last to have useful full open source drivers? I have a
> couple of RV3xxs (Radeon X300 + X600 Pro) I'd love to try this with :)

the patch from above basically should run with all cards supported by
the xf86-video-ati driver. Just have a look at one of the more recent 
man pages:

http://cgit.freedesktop.org/~agd5f/xf86-video-ati/tree/man/radeon.man?h=vsync_accel

Unfortunately with Radeons we currently have 2 unsolved problems:

1. there appears to be a tiny bug in the XV overlay scaling code which
sometimes mixes even and odd fields at certain resolutions. A workaround
to compensate for this is to scale the opposite way. This is done by the
xineliboutput option 'Fullscreen mode: no, Window height: 575 px'
instead of 'Window height: 575 px' (as noted in my configuration example).

Overlay XV uses double buffering, eliminating any tearing effects. This
works pretty well.

2. the other way to use the XV video extension is textured mode. This method
shows very good results. No scaling problems at all. But this code is so
new (a few weeks) that proper tearing protection does not even exist for it
yet.

So for demonstration purposes I still prefer the overlay rather than the
textured XV adaptor.

> "Apparently, some hardware doesn't support interlaced mode. If you have
> sync problems, check the sync signal with an oscilloscope."  

but we are on the safe side. Radeons do support it:)

> In fact, my biggest problem with this project before now has been the
> manufacture of such an adapter - my soldering skills are beyond poor.

just recycle a conventional VGA monitor cable. So you just have to
fiddle with the SCART side of the cable.

> I don't suppose you'd be willing to make some VGA -> SCART hobby-boxes
> up for a suitable fee? :)

at least this was not my primary intention:)

Cheers,
Thomas
  
thomas July 22, 2008, 9:41 p.m. UTC | #5
On Tue, Jul 22, 2008 at 08:30:46PM +0200, Theunis Potgieter wrote:
> currently I'm still using a pentium 4, 2.4GHz machine with nvidia AGP 440MX
> card,

at least the VGA-to-SCART cable (not yet the patch itself) does run here
on nVidia hardware without problems. Box is a PUNDIT P1-AH2 with nVidia C51PV 
[GeForce 6150] graphics.

> only way to get that to work properly was with the older nvidia drivers
> 71.86.0 , apparently the newer drivers forces PAL or any other TV Standard
> to run @60Hz instead of 50Hz, which is what my broadcast is. So I had to
> "downgrade" the driver to get the proper output.

really? On my Pundit I use NVIDIA-Linux-x86-100.14.19-pkg1.run and
the attached xorg.conf with no problems.

>     Option         "UseEDIDFreqs" "FALSE"
>     Option         "UseEDIDDpi" "FALSE"

I just use one big hammer instead:)

Option "UseEDID" "FALSE"

That works (mostly).

-Thomas
Section "ServerLayout"
    Identifier     "Default Layout"
    Screen         "Default Screen" 0 0
    InputDevice    "Generic Keyboard"
    InputDevice    "Configured Mouse"
    Option         "BlankTime" "0"
    Option         "StandbyTime" "0"
    Option         "SuspendTime" "0"
    Option         "OffTime" "0"
EndSection

Section "Files"
    FontPath        "/usr/share/fonts/X11/misc"
EndSection

Section "Module"
    Load           "i2c"
    Load           "bitmap"
    Load           "ddc"
    Load           "extmod"
    Load           "freetype"
#    Load           "glx"
    Load           "int10"
    Load           "vbe"
EndSection

Section "ServerFlags"
    Option         "AllowMouseOpenFail" "on"
EndSection

Section "InputDevice"
    Identifier     "Generic Keyboard"
    Driver         "kbd"
    Option         "CoreKeyboard"
    Option         "XkbRules" "xorg"
    Option         "XkbModel" "pc104"
    Option         "XkbLayout" "us"
EndSection

Section "InputDevice"
    Identifier     "Configured Mouse"
    Driver         "mouse"
    Option         "CorePointer"
    Option         "Device" "/dev/input/mice"
    Option         "Protocol" "ImPS/2"
    Option         "Emulate3Buttons" "true"
EndSection

Section "Monitor"
        Identifier      "Generic Monitor"
        Option          "DPMS"
        HorizSync 15-16                                                     
        Modeline "720x576i"   13.875 720  744  808  888  576  580  585  625 -HSync -Vsync interlace            # pundit p1 --_---_--
EndSection

Section "Device"
    Option "UseEDID" "FALSE"
    Option "UseEvents" "True"
    Option "NoLogo" "True"
    Identifier     "Generic Video Card"
    Driver         "nvidia"
EndSection

Section "Screen"
    Identifier     "Default Screen"
    Device         "Generic Video Card"
    Monitor        "Generic Monitor"
    DefaultDepth   24

    SubSection     "Display"
	Depth      24
	Modes      "720x576i"           #RGB
    EndSubSection
EndSection
  
Martin Emrich July 22, 2008, 10:12 p.m. UTC | #6
Hi!

First thing: A great idea!

Thomas Hilber schrieb:

> not to forget interlaced formats are still in effect for HDTV. I think
> you could recycle the basic idea behind my patch for HDTV as well.

I have connected my VDR box to my TV via a DVI-to-HDMI cable, set the
resolution to 1920x1080 and let the graphics card do the upscaling
instead of the TV, because the quality looks IMHO better this way. But
the same problem is still present here: the refresh rate of the graphics
card is not bound to the field rate of the incoming TV signal, so I can
either disable sync-to-vblank and have tearing artefacts, or enable it
and have an unsteady framerate.

I wonder if your patch could be applied to a DVI/HDMI connection, too?
It's a Radeon X850, currently with xf86-video-ati 6.6.3 and xorg-server 1.4.

Ciao

Martin
  
Theunis Potgieter July 23, 2008, 7:41 a.m. UTC | #7
The xorg.conf options differ for newer versions of the nvidia driver, that
is why mine looks different.

How I picked up on the problem: when I ran xvidtune on DISPLAY=:0.1
(TV-Out) I found that even when I set the modeline, it still ran @60Hz,
thus showing the tearing effect, and I had to enable a deinterlacer. After
googling for 6 months, I found somebody on a mailing list explaining that
the TV-Out (s-video) could be set to run @50Hz, but only with the older
nvidia drivers, and because my card is old that was not a problem. Obviously
this only helps for the TV-Out on nvidia, so I don't require any
deinterlacers. I use the machine as a home PC on DISPLAY=:0.0. I understand
that your solution helps when using a LCD/Plasma with dvi/d-sub/scart
connectors.

Just wanted to share my experience with all :) I'm only showing that you can
consolidate your hardware too, if implemented correctly. I only have the 1
pc running in the house and don't see a need to run more. I extend the
svideo/audio/IR cable to the next room. Not really needed now since the pc
runs quiet when the cpu fan stops. Only thing making a noise now is the
already relatively "quiet" power supply. Things that start up the cpu fan are
xvid/divx and firefox (on DISPLAY=:0.0), taking into account that "live" tv
is also off-loaded using xvmc.

Theunis

  
Torgeir Veimo July 23, 2008, 8:04 a.m. UTC | #8
On 23 Jul 2008, at 02:37, Thomas Hilber wrote:

> To a certain degree you can workaround this by software deinterlacing.

> After some further experimenting I finally found a solution to fine  
> adjust the
> frame rate of my elderly Radeon type card.

> Just trimming the length of a few scanlines during vertical retrace
> period does the trick.
>
> Then I tried to implement the new functionality by applying only  
> minimum
> changes to my current VDR development system. Radeon DRM driver is  
> perfectly
> suited for that.

> I finally ended up in a small patch against Radeon DRM driver and a  
> even
> smaller one against xine-lib.

Your approach is very interesting, I myself have seen the problems  
that clock drift has on judder when using softdevice with vdr.

Have you considered applying your approach to DirectFB? There's a  
radeon driver which is not too hard to change, it also has a kernel  
module which could be augmented by using an ioctl command.

In addition, you might want to try out your approach with a matrox  
G550 card. These have field perfect interlaced output using DirectFB,  
so you'd have that part of the problem solved already.
  
Laz July 23, 2008, 8:20 a.m. UTC | #9
On Tuesday 22 Jul 2008, Thomas Hilber wrote:
> solution
> --------
>
> graphics cards basically are not designed for variable frame rates.
> Once you have setup their timing you are not provided any means like
> registers to synchronize the frame rate with external timers. But
> that's exactly what's needed for signal output to stay in sync with the
> frame rate provided by xine-lib or other software decoders.
>
> To extend/reduce the overall time between vertical retrace I first
> dynamically added/removed a few scanlines to the modeline but with bad
> results. By doing so the picture was visibly jumping on the TV set.

{snippage}

Interesting...

I looked at this sort of thing a few years back and came to the conclusion 
that the only cards that could be convinced to sync at such low rates, 
i.e. 50 Hz for PAL, were the Matrox G400, G450, etc. Whenever I tried 
setting modelines with any other cards, I never got any output or an 
error when starting X.

I take it that more modern cards are a lot more flexible in this respect!

I'm currently using a G450 with softdevice connected to a CRT TV and it
works pretty well most of the time, with the odd flicker due to dodgy sync
every now and then.

Using hardware to do the deinterlacing is _definitely_ the way forward, 
especially for CRT. (Not sure whether LCDs display an interlaced
stream "properly" or whether they try to interpolate somehow and refresh
the whole screen at once. I'm not buying one until 1. terrestrial is 
available in the UK, 2. my current TV dies, 3. there is a solution like 
this which utilises older hardware!).

Looks interesting...

Cheers,

Laz
  
Paul Menzel July 23, 2008, 10:10 a.m. UTC | #10
Dear Theunis,


Am Mittwoch, den 23.07.2008, 09:41 +0200 schrieb Theunis Potgieter:

> Only thing making a noise now is the already relatively "quiet" power
> supply.

You probably already know this, but anyway.

There are power supplies called picoPSU which have no fans and are, as
far as I understand, more efficient. See [1] for example. Additionally
you need an AC/DC adapter which gets, I think, pretty hot but it is
placed outside of the case. Also it is a little more expensive than a
“normal” PSU but you probably also save some money because of the better
efficiency.


Regards,

Paul


[1] http://www.bigbruin.com/reviews05/review.php?item=picopsu&file=1
  
thomas July 23, 2008, 12:16 p.m. UTC | #11
On Wed, Jul 23, 2008 at 12:12:46AM +0200, Martin Emrich wrote:
> I have connected my VDR box to my TV via a DVI-to-HDMI cable, set the
> resolution to 1920x1080 and let the graphics card do the upscaling
> instead of the TV, because the quality looks IMHO better this way. But

ok. But if doing so you still have to continue deinterlacing in
software. This is because any scaling in Y dimension intermixes even/odd
fields in the frame buffer, finally producing a totally messed up VGA output
signal.

Scaling in X dimension however is always allowed. E.g. for switching
between 4:3/16:9 formats on a 16:9 TV-set.

> here still the same problem is present, the refresh rate of the graphics
> card is not bound to the field rate of the incoming TV signal, so I can
> either disable sync-to-vblank and have tearing artefacts or enable it
> and have an unsteady framerate.

right. Even if you still must use software deinterlacing for some reason
you benefit from the 'sync_fields' patch. You then can enable
sync-to-vblank and the patch dynamically synchronizes the graphics card's
vblanks with the TV signal's field updates, thus avoiding unsteady frame rates
at the VGA/DVI/HDMI output.

> I wonder if your patch could be applied to a DVI/HDMI connection, too?
> Its a Radeon X850 currently with xf86-video-ati 6.6.3 and xorg-server 1.4.

In your case the only prerequisite is support of your Radeon X850 by the Radeon
DRM driver. DRM is normally shipped with the kernel, so this is a kernel/driver
issue. But I don't expect problems here, though I have not tested the
X850 myself yet.

-Thomas
  
thomas July 23, 2008, 12:50 p.m. UTC | #12
On Wed, Jul 23, 2008 at 09:41:54AM +0200, Theunis Potgieter wrote:
> deinterlacers. I use the machine as a home PC on DISPLAY=:0.0. I understand
> that your solution helps when using a LCD/Plasma with dvi/d-sub/scart
> connectors.

right. It also helps for deinterlaced output using a LCD/Plasma with
dvi/d-sub. 

But it should prove even more useful for interlaced output
with scart on an LCD/Plasma or a cathode ray tube based display.

Though this is quite a challenging task: all components, e.g. budget card
drivers, software decoder, display driver (e.g. Xserver), must play together
seamlessly.

One badly behaving component anywhere in the chain can ruin the overall 
effort.

This is a big difference to an FF card, where almost all important
components are located on a self-contained board, driven by proprietary
firmware:)
  
Martin Emrich July 23, 2008, 1:06 p.m. UTC | #13
Hi!

Thomas Hilber schrieb:
> On Wed, Jul 23, 2008 at 12:12:46AM +0200, Martin Emrich wrote:
>> I have connected my VDR box to my TV via a DVI-to-HDMI cable, set the
>> resolution to 1920x1080 and let the graphics card do the upscaling
>> instead of the TV, because the quality looks IMHO better this way. But
> 
> ok. But if doing so you still have to continue deinterlacing in
> software. This is because any scaling in Y dimension intermixes even/odd
> fields in the frame buffer. Finally producing a totally messed VGA output 
> signal.

Of course. As I also use other applications on the box (mplayer, photo
viewing), neither reducing the resolution nor enabling interlacing
(1080i) is desired.

Software deinterlacing is no problem, from time to time I experiment
with all the interlacer options. (I wonder why there's no simple "TV
simulator" that upmixes 50 fields/s to 50 frames/s just like a CRT TV?).

> right. Even if you still must use software deinterlacing for some reason
> you benefit from the 'sync_fields' patch. You then can enable
> sync-to-vblank and the patch dynamically synchronizes graphics card's vblanks
> and TV signal's field updates. Thus avoiding unsteady frame rates at
> VGA/DVI/HDMI output.

Ok. I'm really busy currently (but your project looked so cool that I
just *had* to write an email to the list), but as soon as I get to it,
I'll try to make it work.

Does anyone have a 1080p@50Hz modeline ready? Currently, I use the
settings provided by the TV via EDID, and I guess it defaults to 60Hz :(

>> I wonder if your patch could be applied to a DVI/HDMI connection, too?
>> Its a Radeon X850 currently with xf86-video-ati 6.6.3 and xorg-server 1.4.
> 
> In your case the only prerequisite is support of your Radeon X850 by Radeon 
> DRM driver. DRM normally is shipped with kernel. So this is a kernel/driver
> issue. But I don't expect problems here though I not yet testet the 
> X850 myself (yet).

As the box runs a home-built netboot mini distro, I am quite flexible
regarding kernel versions. As soon as I have some spare time (probably
after I've finished my BA thesis :(), I'll get to it...

Ciao

Martin
  
thomas July 23, 2008, 1:09 p.m. UTC | #14
On Tue, Jul 22, 2008 at 11:51:13PM +0300, Pasi Kärkkäinen wrote:
> A bit off topic.. Does any of the video players for Linux switch to a
> resolution/modeline with a different refresh rate when watching a movie to
> get perfect synchronization and no tearing? 

some time ago I accidentally stumbled across these options in my outdated
xineliboutput version:

xineliboutput.VideoModeSwitching = 1
xineliboutput.Modeline =

maybe these are intended for this purpose? I didn't care yet.

> An example.. your desktop might be at 70 hz refresh rate in normal use (ok,
> maybe it's 60 hz nowadays with LCD displays), and when you start watching
> PAL TV it would be better to have your display at 50 hz or 100 hz to get
> perfect output.. and then, when you start seeing a 24 fps movie, it would be
> best to have your display in 72 hz mode (3*24).. etc.

http://en.wikipedia.org/wiki/XRandR is what you are looking for. At
least when talking about Xservers with that capability. Don't know how 
well it's supported by today's VDR output plugins.
  
Torgeir Veimo July 23, 2008, 1:20 p.m. UTC | #15
On 23 Jul 2008, at 23:06, Martin Emrich wrote:

>  (I wonder why there's no simple "TV simulator" that upmixes 50  
> fields/s to 50 frames/s just like a CRT TV?).


It's very hard to simulate this 'upmix'. A CRT TV actually moves the
electron beam across the screen, and the phosphor stays illuminated for
some time after being hit by the beam. This is very hard to simulate
with a digital screen, which is either on or off, or has some slowness
of its own which is different from how a CRT screen works.

The dscaler project has implemented some of the best deinterlacing  
algorithms and most of the tvtime algorithms are implemented (to my  
knowledge) with basis in dscaler source / ideas. See http://dscaler.org/ .
dscaler basically is a deinterlace and display program
that takes input from bt8x8 based capture cards.

Someone on that project had an idea to create a setup where the  
display hardware was synced to the input clock of the capture card,  
but I'm not sure if anything ever came out of that idea.
  
Pasi Kärkkäinen July 23, 2008, 1:49 p.m. UTC | #16
On Wed, Jul 23, 2008 at 03:09:29PM +0200, Thomas Hilber wrote:
> On Tue, Jul 22, 2008 at 11:51:13PM +0300, Pasi Kärkkäinen wrote:
> > A bit off topic.. Does any of the video players for Linux switch to a
> > resolution/modeline with a different refresh rate when watching a movie to
> > get perfect synchronization and no tearing? 
> 
> some time ago I accidentally stumbled across these options in my outdated
> xineliboutput version:
> 
> xineliboutput.VideoModeSwitching = 1
> xineliboutput.Modeline =
> 
> maybe these are intended for this purpose? I didn't care yet.
>

Ok. Sounds like it.. 

although "xineliboutput.Modeline" sounds a bit like it only wants to change to 
one specific modeline.. 
 
> > An example.. your desktop might be at 70 hz refresh rate in normal use (ok,
> > maybe it's 60 hz nowadays with LCD displays), and when you start watching
> > PAL TV it would be better to have your display at 50 hz or 100 hz to get
> > perfect output.. and then, when you start seeing a 24 fps movie, it would be
> > best to have your display in 72 hz mode (3*24).. etc.
> 
> http://en.wikipedia.org/wiki/XRandR is what you are looking for. At
> least when talking about Xservers with that capability. Don't know how 
> well it's supported by today's VDR output plugins.
> 

Thanks. 

I knew new xservers are able to change resolution/modeline on the fly
nowadays.. but didn't remember it was XRandR extension doing it :) 

-- Pasi
  
Pasi Kärkkäinen July 23, 2008, 2:05 p.m. UTC | #17
On Tue, Jul 22, 2008 at 06:37:05PM +0200, Thomas Hilber wrote:
> 
> It appeared to be a privilege of so called full featured cards (expensive cards
> running proprietary firmware) to output true RGB PAL at variable framerate.
> Thus always providing full stream synchronicity.
>

I assume RGB NTSC should work as well.. ?

I live in Europe so PAL is the thing for me, but sometimes you have video in
NTSC too..

> 
> After some further experimenting I finally found a solution to fine adjust the
> frame rate of my elderly Radeon type card. This time without any bad side 
> effects on the screen.
> 
> Just trimming the length of a few scanlines during vertical retrace
> period does the trick.
>

<snip>

> 
> When xine-lib calls PutImage() it checks whether to increase/decrease
> Xservers frame rate. This way after a short adaption phase xine-lib can
> place it's PutImage() calls right in the middle between 2 adjacent vertical
> blanking intervals. This provides maximum immunity against jitter. And
> even better: no more frames/fields are lost due to stream and graphics
> card frequency drift.
> 

Hmm.. can you explain what "increase/decrease Xservers frame rate" means? 

I don't really know how xserver or display drivers work nowadays, but back
in the days when I was coding graphics stuff in plain assembly (in MSDOS) I
always did this to get perfect synchronized output without any tearing: 

1. Render frame to a (double) buffer in memory
2. Wait for vertical retrace to begin (beam moving from bottom of the screen to top)
3. Copy the double buffer to display adapter framebuffer
4. Goto 1

So the video adapter framebuffer was always filled with a full new frame right
before it was visible to the monitor..

This way you always got full framerate, smooth video, no tearing.. as long
as your rendering took less than duration of a single frame :) 

So I guess the question is can't you do the same nowadays.. 
lock the PutImage() to vsync? 

-- Pasi
  
thomas July 23, 2008, 3:53 p.m. UTC | #18
Hi,

On Wed, Jul 23, 2008 at 06:04:29PM +1000, Torgeir Veimo wrote:
> Your approach is very interesting, I myself have seen the problems  
> that clock drift has on judder when using softdevice with vdr.

yes, that's also my experience with certain xineliboutput -
xine-lib version combinations. I also experienced that certain DVB
drivers sporadically stall the system for an inadmissibly long period of time.

My current configuration however outputs fields *very* regularly at
least when doing playback. That's why I currently don't want to update 
any of these components.

Issues with judder are not so noticeable under more common operating
conditions. Maybe that's why developers of softdecoder components are 
not always aware of problems in this area.

But with deinterlacing deactivated, irregularities are mercilessly exposed,
because after each dropped field the subsequent fields are replayed swapped:)

A measurement protocol showing you how regularly frames drip in with my
current configuration can be found here

http://www.vdr-portal.de/board/attachment.php?attachmentid=19208

attached to that post:

http://www.vdr-portal.de/board/thread.php?postid=737687#post737687

legend:
vbl_received - count of VBLANK interrupts since Xserver start
vbl_since - time in usecs since last VBLANK interrupt
vbl_now - time (only usec part) when ioctl has been called
vbl_trim - trim value currently configured

some explanations:
vbl_received increases by two from one log line to the next because 2 VBLANK
interrupts (== fields) are received each frame.

vbl_since is constantly incremented by the drift between the VBLANK
timing based clock and xine-lib's calls to PutImage() (effectively the
stream timestamps).
After reaching a programmed level of about vbl_since == 11763 (for this
particular example) vbl_trim starts to oscillate between the two
values 0 and 200 (only a sample), representing the two active Xserver
modelines. This is only for simplicity; we could also program a much finer
grading if desired. We are not limited to 2 modelines.

When vbl_trim starts to oscillate, the Xserver's video timing is fully
synchronized with the stream.

Interesting is the minimal judder of vbl_now: it is incremented very
regularly, by a value very close to 40000 usec each call.

BTW:
The measurement took place in the Xserver (at the place where the double
buffers are switched), not at the patch in xine-lib.
Thus it comprises all latencies in the data path between xine-lib and the
Xserver. And even so, there are effectively no stray values.

I can watch a football recording for about 20 minutes (my test material)
without any field loss.

> Have you considered applying your approach to DirectFB? There's a  
> radeon driver which is not too hard to change, it also has a kernel  
> module which could be augmented by using an ioctl command.

not yet, but it sounds very interesting! Unfortunately I'm not on holiday
and can't spend too much time on this project, though I dedicate each
free minute to it:)

> In addition, you might want to try out your approach with a matrox  
> G550 card. These have field perfect interlaced output using DirectFB,  
> so you'd have that part of the problem solved already.

right, a very good idea! You mean the AGP G550? I almost forgot there
are 2 of these boards lying around here somewhere.

Cheers,
  Thomas
  
thomas July 23, 2008, 6:43 p.m. UTC | #19
On Wed, Jul 23, 2008 at 09:20:08AM +0100, Laz wrote:
> that the only cards that could be convinced to sync at such low rates, 
> i.e. 50 Hz for PAL, were the Matrox G400, G450, etc. Whenever I tried 
> setting modelines with any other cards, I never got any output or an 
> error when starting X.

also Radeons need a special 'invitation' for this by specifying:

Option      "ForceMinDotClock" "12MHz"

> I take it that more modern cards are a lot more flexible in this respect!

maybe. The nVidia cards I tested so far work without special problems. But I've
heard that only the closed source driver is capable of 50 Hz for PAL. I didn't
test the open source driver myself at that low frequency.

I hope one day the Nouveau project could give us enough support for PAL 
on nVidia with adequate drivers.

> I'm currently using a G450 with softdevice connected to a CRT TV and it 
> works pretty well most of the time with the odd flicker due to dodgy sync 
> every now and than.

maybe you can convince the softdevice developers to give the patch a
try:)

> Using hardware to do the deinterlacing is _definitely_ the way forward, 

for displaying interlaced content it's always best to use a display
with native interlace capabilities. Then nobody has to deinterlace
at all. You just route the plain fields straight through to the hardware.

> especially for CRT. (Not sure whether LCDs display an interlaced 
> streame "properly" or whether they try to interpolate somehow and refresh 

I've heard modern LCDs do a pretty good job interpreting a conventional
PAL signal, at the expense of a huge amount of signal processing circuitry.
  
thomas July 23, 2008, 7:21 p.m. UTC | #20
On Wed, Jul 23, 2008 at 05:05:21PM +0300, Pasi Kärkkäinen wrote:
> I assume RGB NTSC should work as well.. ?

basically yes. The devil is in the details:) Just give it a try.

> > When xine-lib calls PutImage() it checks whether to increase/decrease
> > Xservers frame rate. This way after a short adaption phase xine-lib can
> > place it's PutImage() calls right in the middle between 2 adjacent vertical
> > blanking intervals. This provides maximum immunity against jitter. And
> > even better: no more frames/fields are lost due to stream and graphics
> > card frequency drift.
> > 
> 
> Hmm.. can you explain what "increase/decrease Xservers frame rate" means? 

you simply adjust the time between two vertical blanking (retrace) intervals
to your needs.

This is done by lengthening/shortening scan lines that are not visible
on the screen, because they are hidden within the vertical blanking
interval.

> I don't really know how xserver or display drivers work nowadays, but back
> in the days when I was coding graphics stuff in plain assembly (in MSDOS) I
> always did this to get perfect synchronized output without any tearing: 
> 
> 1. Render frame to a (double) buffer in memory
> 2. Wait for vertical retrace to begin (beam moving from bottom of the screen to top)
> 3. Copy the double buffer to display adapter framebuffer
> 4. Goto 1

that's very similar to the way a Radeon handles this when the overlay
method is chosen for the XV extension:

1. the Xserver writes the incoming frame to one of its 2 buffers, strictly
alternating between the two.

2. the CRT controller sequentially reads first the even and then the odd field
(or the other way round, depending on the start condition) out of the buffer,
and then switches to the next buffer, also strictly alternating between the
two.

You just have to take care that data is written to the double buffers in the
right sequence, so it is always read in the correct sequence by the CRT
controller.

> So the video adapter framebuffer was always filled with a full new frame right
> before it was visible to the monitor..

the same here. Otherwise the CRT controller would reuse already shown
data.

> So I guess the question is can't you do the same nowadays.. 
> lock the PutImage() to vsync? 

exactly. The patch tries hard to do this:) But to put it in your words:
It's only a 'soft' lock. Loading the machine too much can cause problems.

-Thomas
  
Martin Emrich July 23, 2008, 7:41 p.m. UTC | #21
Hi!

Torgeir Veimo schrieb:
> On 23 Jul 2008, at 23:06, Martin Emrich wrote:
> 
>>  (I wonder why there's no simple "TV simulator" that upmixes 50  
>> fields/s to 50 frames/s just like a CRT TV?).
> 
> 
> It's very hard to simulate this 'upmix'. A CRT TV actually moves the  
> electron beam across the screen and the phosphor has some time it  
> stays illuminated after being hit by the beam. This is very hard to  
> simulate with a digital screen which is either on or off, or has some  
> slowness by itself which is different from how a CRT screen works.

I did not mean to actually simulate the brightness decay in the
phosphors, just the points in time where the fields are presented.

Let's assume we have two frames to be played back, each of which consists
of two fields:  {1,2} and {3,4}.

I don't know if it actually works this way, but as far as I know,
playing back interlaced content with 25 frames/s on a progressive
display looks this way:

11111                          33333
22222  ...1/25th sec. later:   44444
11111                          33333
22222                          44444

As field 3 is a 1/50th second "older" than field 4, there are jaggies in
moving scenes.

What I am looking for would be this, with 50 frames/s:

11111             11111        33333      33333
.....  1/50th s.  22222  1/50s 22222      44444
11111             11111        33333      33333
.....             22222        22222      44444

So each field is still being shown for a 1/25th of a second, but for
the "right" 1/25th second. The output then no longer serves 25fps but
50fps to XVideo, DirectFB or whatever.

All of this of course makes only sense for PAL content when the TV can
do 50Hz, not 60Hz.
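
To make the scheme concrete, here is a minimal sketch in plain C (just an
illustration, not code from any existing player): keep one running output
frame and, every 1/50 s, overwrite only the lines of the field that has just
become current. W, H and the actual display call are placeholders.

/* Illustration of the 50 frames/s presentation described above; the
 * display call itself is omitted. */
#include <stdint.h>
#include <string.h>

#define W 720
#define H 576

/* copy one field into the running frame: parity 0 = top lines, 1 = bottom */
static void blend_field(uint8_t frame[H][W], uint8_t field[H / 2][W],
			int parity)
{
	for (int y = parity; y < H; y += 2)
		memcpy(frame[y], field[y / 2], W);
}

int main(void)
{
	static uint8_t screen[H][W];			/* running output frame */
	static uint8_t top[H / 2][W], bottom[H / 2][W];	/* one interlaced frame */

	blend_field(screen, top, 0);	/* show this for 1/50 s ...           */
	blend_field(screen, bottom, 1);	/* ... then this for the next 1/50 s  */
	return 0;
}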

> The dscaler project has implemented some of the best deinterlacing  
> algorithms and most of the tvtime algorithms are implemented (to my  
> knowledge) with basis in dscaler source / ideas. See http:// 
> dscaler.org/ . dscaler basically is a deinterlace and display program  
> that takes input from bt8x8 based capture cards.

I assume these are the "tvtime" deinterlacers in the libxineoutput
plugin. I played around with them, but none of them resulted in a
picture as sharp and contrasty as without any deinterlacer. So I have to
choose between sharpness and clean motion. And even during the EURO
2008, I chose the first.

> Someone on that project had an idea to create a setup where the  
> display hardware was synced to the input clock of the capture card,  
> but I'm not sure if anything ever came out of that idea.

I also thought of that. One then would have to sync to the soundcard's
buffer, too, and remove/duplicate samples as necessary, to keep the
audio synchronized.

BTW: How does libxineoutput synchronize? I noticed a slight AV desync
growing over ca. 5 minutes, then the audio jumps once and the desync
snaps back into the right position (digital output to AV receiver).

Ciao

Martin
  
Theunis Potgieter July 23, 2008, 8:38 p.m. UTC | #22
I notice an AV desync after 5 minutes; it definitely happens when it reaches
an advertisement that was cut out, or when I jump to an advertisement. :(

the only way I could "fix" it was to re-encode the edited recording with
'mencoder -ovc copy -oac copy -of mpeg -mpegopts format=pes2 -o new/001.vdr
old/001.vdr'

I would see frames being skipped when it reaches where the cut took place...

I'm using vdr 1.6.0_p1

Theunis

  
Pasi Kärkkäinen July 24, 2008, 10:49 a.m. UTC | #23
On Wed, Jul 23, 2008 at 09:21:01PM +0200, Thomas Hilber wrote:
> On Wed, Jul 23, 2008 at 05:05:21PM +0300, Pasi Kärkkäinen wrote:
> > I assume RGB NTSC should work as well.. ?
> 
> basically yes. The devil is in the details:) Just give it a try.
> 
> > > When xine-lib calls PutImage() it checks whether to increase/decrease
> > > Xservers frame rate. This way after a short adaption phase xine-lib can
> > > place it's PutImage() calls right in the middle between 2 adjacent vertical
> > > blanking intervals. This provides maximum immunity against jitter. And
> > > even better: no more frames/fields are lost due to stream and graphics
> > > card frequency drift.
> > > 
> > 
> > Hmm.. can you explain what "increase/decrease Xservers frame rate" means? 
> 
> you simply adjust the time between two vertical blanking (retrace) intervals
> to your needs.
> 
> This is done by lengthening/shortening scan lines that are not visible
> on the screen. Because they are hidden within the vertical blanking
> interval.
>

Hmm.. I still don't understand why you need to do this in the first place? 
 
> > I don't really know how xserver or display drivers work nowadays, but back
> > in the days when I was coding graphics stuff in plain assembly (in MSDOS) I
> > always did this to get perfect synchronized output without any tearing: 
> > 
> > 1. Render frame to a (double) buffer in memory
> > 2. Wait for vertical retrace to begin (beam moving from bottom of the screen to top)
> > 3. Copy the double buffer to display adapter framebuffer
> > 4. Goto 1
> 
> that's very similar to the way a Radeon handles this when the overlay
> method is chosen for the XV extension:
> 
> 1. the Xserver writes the incoming frame to one of its 2 buffers. Strictly 
> alternating between the two.
> 
> 2. the CRT controller sequentially reads first the even and then the odd
> (or the other way round, depending on the start condition) field out of the
> buffer, and then switches to the next buffer. Also strictly alternating
> between the two.
> 
> You just have to take care that data is written in the right sequence to
> the double buffers, so it is always read in the correct sequence by the
> CRT controller.
> 

Ok.

> > So the video adapter framebuffer was always filled with a full new frame right
> > before it was visible to the monitor..
> 
> the same here. Otherwise the CRT controller would reuse already shown
> data.
> 
> > So I guess the question is can't you do the same nowadays.. 
> > lock the PutImage() to vsync? 
> 
> exactly. The patch tries hard to do this:) But to put it in your words:
> It's only a 'soft' lock. Loading the machine too much can cause problems.
> 

Does this mean XV extension (or X itself) does not provide a way to 
"wait for retrace" out-of-the-box.. and your patch adds that functionality? 

Sorry for the stupid questions :) 

-- Pasi
  
Torgeir Veimo July 24, 2008, 11:02 a.m. UTC | #24
On 24 Jul 2008, at 20:49, Pasi Kärkkäinen wrote:

>>> Hmm.. can you explain what "increase/decrease Xservers frame rate"  
>>> means?
>>
>> you simply adjust the time between two vertical blanking (retrace)  
>> intervals
>> to your needs.

> Hmm.. I still don't understand why you need to do this in the first  
> place?


It is to avoid the output framerate drifting away from the DVB-T/S/C  
input framerate.
  
Pasi Kärkkäinen July 24, 2008, 11:29 a.m. UTC | #25
On Thu, Jul 24, 2008 at 09:02:50PM +1000, Torgeir Veimo wrote:
> 
> On 24 Jul 2008, at 20:49, Pasi Kärkkäinen wrote:
> 
> >>> Hmm.. can you explain what "increase/decrease Xservers frame rate"  
> >>> means?
> >>
> >> you simply adjust the time between two vertical blanking (retrace)  
> >> intervals
> >> to your needs.
> 
> > Hmm.. I still don't understand why you need to do this in the first  
> > place?
> 
> 
> It is to avoid the output framerate drifting away from the DVB-T/S/C  
> input framerate.
> 

Oh, now I got it.. and it makes sense :) You can't really control the DVB
stream you receive so you need to sync the output.

There shouldn't be this kind of problems with streams from local files (DVD
for example)..

-- Pasi
  
thomas July 24, 2008, 11:31 a.m. UTC | #26
On Thu, Jul 24, 2008 at 09:02:50PM +1000, Torgeir Veimo wrote:
> > Hmm.. I still don't understand why you need to do this in the first  
> > place?
> 
> 
> It is to avoid the output framerate drifting away from the DVB-T/S/C  
> input framerate.

right. Normally Xserver modelines can only produce 'discrete' and
'static' video timings somewhere near 50Hz. For example 50.01Hz 
if you are lucky.

With the patch you can dynamically fine tune the frame rate in steps of
about 0.000030Hz, which should be enough for our purpose:)
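
For a rough feeling where a step of this size comes from, here is a
back-of-the-envelope calculation, assuming a typical 720x576i PAL modeline
(13.875 MHz pixel clock, 888 total pixels per line, 625 total lines, like
the one attached later in this thread); the exact value of course depends
on the modeline actually in use:

#include <stdio.h>

int main(void)
{
    const double pixclock = 13.875e6;   /* pixel clock [Hz]      */
    const double htotal   = 888.0;      /* total pixels per line */
    const double vtotal   = 625.0;      /* total lines per frame */

    double rate   = pixclock / (htotal * vtotal);        /* 25 Hz frame rate  */
    double rate_1 = pixclock / (htotal * vtotal + 1.0);  /* one hidden line
                                                            lengthened by one
                                                            pixel clock       */
    printf("frame rate     : %.6f Hz\n", rate);
    printf("after +1 pixel : %.6f Hz\n", rate_1);
    printf("step size      : %.7f Hz\n", rate - rate_1);
    return 0;
}

One pixel clock more or less per frame changes the rate by a few tens of
microhertz, i.e. the same order of magnitude as the step size quoted above.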

I actually measured this step size by a quickly hacked measurement
tool (drift_monitor) which can be found here (field_sync_tools.tgz):

http://www.vdr-portal.de/board/thread.php?postid=739784#post739784
  
thomas July 24, 2008, 11:36 a.m. UTC | #27
On Thu, Jul 24, 2008 at 01:49:18PM +0300, Pasi Kärkkäinen wrote:
> Does this mean XV extension (or X itself) does not provide a way to 
> "wait for retrace" out-of-the-box.. and your patch adds that functionality? 

most implementations provide "wait for retrace" out-of-the-box. 

The point is that for our purpose the Xserver's framerate must be
dynamically adjustable, in very small steps, to avoid visible effects.

That's what the patch does.
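
In pseudo-C the idea looks roughly like this (illustrative only, the names
below are made up and are not the real interface of the patch):

#include <stdint.h>

#define FIELD_PERIOD_US 20000       /* 50 Hz field rate              */
#define TARGET_PHASE_US 10000       /* middle between two vblanks    */
#define DEADBAND_US       500       /* ignore tiny phase errors      */

extern int64_t usec_since_last_vblank(void);  /* hypothetical helper  */
extern void    nudge_frame_period(int step);  /* hypothetical: +1 adds,
                                                 -1 removes a few pixel
                                                 clocks of hidden line */

/* called by the decoder at every PutImage() */
void soft_sync(void)
{
    int64_t phase = usec_since_last_vblank();

    if (phase < TARGET_PHASE_US - DEADBAND_US)
        nudge_frame_period(-1);     /* speed the Xserver up a little: the
                                       vblanks drift earlier and PutImage()
                                       moves towards the middle            */
    else if (phase > TARGET_PHASE_US + DEADBAND_US)
        nudge_frame_period(+1);     /* slow the Xserver down a little      */
}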
  
Pasi Kärkkäinen July 24, 2008, 11:41 a.m. UTC | #28
On Thu, Jul 24, 2008 at 01:36:41PM +0200, Thomas Hilber wrote:
> On Thu, Jul 24, 2008 at 01:49:18PM +0300, Pasi Kärkkäinen wrote:
> > Does this mean XV extension (or X itself) does not provide a way to 
> > "wait for retrace" out-of-the-box.. and your patch adds that functionality? 
> 
> most implementations provide "wait for retrace" out-of-the-box. 
> 
> The point is that the Xserver's framerate must be adjustable dynamically 
> for our purpose. In very small degrees to avoid visible effects.
> 
> That's what the patch does.
> 

Yep, I got it now.. after a while :) 

Now I just need to find someone to fix me a VGA-SCART cable and I can test
these things myself.. 

Or maybe I could test with a VGA/CRT monitor to begin with..

I think my LCD only provides a single/fixed refresh rate through DVI-D.. (?)

-- Pasi
  
Pasi Kärkkäinen July 24, 2008, 11:52 a.m. UTC | #29
On Thu, Jul 24, 2008 at 02:29:15PM +0300, Pasi Kärkkäinen wrote:
> On Thu, Jul 24, 2008 at 09:02:50PM +1000, Torgeir Veimo wrote:
> > 
> > On 24 Jul 2008, at 20:49, Pasi Kärkkäinen wrote:
> > 
> > >>> Hmm.. can you explain what "increase/decrease Xservers frame rate"  
> > >>> means?
> > >>
> > >> you simply adjust the time between two vertical blanking (retrace)  
> > >> intervals
> > >> to your needs.
> > 
> > > Hmm.. I still don't understand why you need to do this in the first  
> > > place?
> > 
> > 
> > It is to avoid the output framerate drifting away from the DVB-T/S/C  
> > input framerate.
> > 
> 
> Oh, now I got it.. and it makes sense :) You can't really control the DVB
> stream you receive so you need to sync the output.
> 
> There shouldn't be this kind of problems with streams from local files (DVD
> for example)..
> 

Or maybe there is after all.. it seems the output refresh rate is not
exactly 50.00 Hz, but something close to it.. so that's causing problems.

Thanks:)

-- Pasi
  
Morfsta July 25, 2008, 11:07 a.m. UTC | #30
On Wed, Jul 23, 2008 at 9:38 PM, Theunis Potgieter
<theunis.potgieter@gmail.com> wrote:
> I notice a AV desync after 5 minutes, it definitely happens when it reaches
> an advertisement that was cut out, or when I jump to a advertisement. :(
>

I can confirm this also occurs with the latest version of vdr-xine.

Have you tried changing the audio.synchronization.av_sync_method in
xine-config to be resample instead of metronom?

I think I found this to be better: -

# method to sync audio and video
# { metronom feedback  resample }, default: 0
audio.synchronization.av_sync_method:resample

# always resample to this rate (0 to disable)
# numeric, default: 0
audio.synchronization.force_rate:48000

Cheers
  
Martin Emrich July 25, 2008, 10:37 p.m. UTC | #31
Hi!

Morfsta schrieb:

> Have you tried changing the audio.synchronization.av_sync_method in
> xine-config to be resample instead of metranom?

Hmm, I didn't know there was such an option... I especially have these
problems with AC3 audio, and these cannot easily be resampled.

I have a cheap ASRock board in my media PC, and an SB Live 5.1 value
sound card that gave me quite some problems to set up correctly; one
of these two is probably responsible for the problems.

In the last media PC before this one, I had an Asus board with onboard
digital out, and with that I didn't have that many problems (but the old
nVidia AGP card was not capable of running at 1920x1080 smoothly).

Ciao

Martin
  
Pasi Kärkkäinen July 27, 2008, 12:53 p.m. UTC | #32
On Tue, Jul 22, 2008 at 06:37:05PM +0200, Thomas Hilber wrote:
> 
> goal
> ----
> 
> develop a budget card based VDR with PAL/RGB output and FF like output quality
> 
> VGA-to-SCART RGB adapter like this: http://www.sput.nl/hardware/tv-x.html
> 

Hi again!

One more question..

Is it possible to output WSS signal from a VGA card to switch between 4:3
and 16:9 modes? 

http://www.intersil.com/data/an/an9716.pdf

-- Pasi
  
thomas July 28, 2008, 1:44 a.m. UTC | #33
On Sun, Jul 27, 2008 at 03:53:01PM +0300, Pasi Kärkkäinen wrote:
> Is it possible to output WSS signal from a VGA card to switch between 4:3
> and 16:9 modes? 
> 
> http://www.intersil.com/data/an/an9716.pdf

it should be possible to emulate WSS by white dots/lines in a
specific scanline, but I have not tried this myself yet.
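
Very roughly the idea would be something like this - the line number,
pixels per bit and the bit pattern are pure placeholders here, the real
timing and bit layout have to be taken from the WSS spec / the app note
linked above:

#include <stdint.h>
#include <string.h>

/* paint a WSS-like pattern of white/black runs into one scan line of the
 * visible frame buffer (8 bpp grayscale assumed for simplicity) */
void paint_wss_line(uint8_t *fb, int pitch, int wss_line,
                    const uint8_t *bits, int nbits, int px_per_bit)
{
    uint8_t *line = fb + wss_line * pitch;

    for (int b = 0; b < nbits; b++) {
        uint8_t v = bits[b] ? 0xff : 0x00;      /* '1' = white run */
        memset(line + b * px_per_bit, v, px_per_bit);
    }
}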

BTW:
I'm currently experimenting with DirectFB instead of Xorg. Their Radeon
driver code does not show the 'Xorg overlay scaling bug' mentioned above.
So I could either stick with DirectFB for the moment, or try, with the
help of their code, to identify the part of Xorg that causes the bug.
  
thomas July 29, 2008, 7:34 a.m. UTC | #34
On Tue, Jul 22, 2008 at 11:17:26PM +0200, Thomas Hilber wrote:
> Unfortunately with Radeons we currently have 2 problems unsolved:
> 
> 1. there appears to be a tiny bug in XV overlay scaling code which
> sometimes mixes even and odd fields at certain resolutions. A workaround
> to compensate for this is to scale the opposite way. This is done by
> xineliboutput option 'Fullscreen mode: no, Window height: 575 px'
> instead of 'Window height: 575 px' (as noted in my configuration example).
> 
> Overlay XV uses double buffering eliminating any tearing effects. This
> works pretty good.
> 
> 2. the other way to use XV video extension is textured mode. This method
> shows very good results. No scaling problems at all. But this code is so 
> new (a few weeks), there even does not yet exist proper tearing protection 
> for.

the first issue was fixed yesterday! The second issue is then moot.
Thanks to Roland Scheidegger I now get a perfect picture for all
source resolutions tested so far.

http://lists.x.org/archives/xorg-driver-ati/2008-July/006143.html
http://www.vdr-portal.de/board/thread.php?postid=741778#post741778

There is currently only one known issue left: detection of initial field
polarity. I don't think this is a big deal. After that I can start
productive use of the patch on my living room VDR.

Maybe then I find some time to port the patch to other platforms 
(like intel based graphics cards).

Cheers
   Thomas
  
Pasi Kärkkäinen July 29, 2008, 10:40 a.m. UTC | #35
On Tue, Jul 29, 2008 at 09:34:39AM +0200, Thomas Hilber wrote:
> On Tue, Jul 22, 2008 at 11:17:26PM +0200, Thomas Hilber wrote:
> > Unfortunately with Radeons we currently have 2 problems unsolved:
> > 
> > 1. there appears to be a tiny bug in XV overlay scaling code which
> > sometimes mixes even and odd fields at certain resolutions. A workaround
> > to compensate for this is to scale the opposite way. This is done by
> > xineliboutput option 'Fullscreen mode: no, Window height: 575 px'
> > instead of 'Window height: 575 px' (as noted in my configuration example).
> > 
> > Overlay XV uses double buffering eliminating any tearing effects. This
> > works pretty good.
> > 
> > 2. the other way to use XV video extension is textured mode. This method
> > shows very good results. No scaling problems at all. But this code is so 
> > new (a few weeks), there even does not yet exist proper tearing protection 
> > for.
> 
> the first issue has been fixed yesterday! The second one is void then.
> Thanks to Roland Scheidegger I now get a perfect picture for all 
> source resolutions testet so far.
> 
> http://lists.x.org/archives/xorg-driver-ati/2008-July/006143.html
> http://www.vdr-portal.de/board/thread.php?postid=741778#post741778
> 

Nice progress!

> There currently is only one known issue left: detection of inital field
> polarity. I don't think this is a big deal. After this I can start
> productive use of the patch on my living room VDR.
>

:)
 
> Maybe then I find some time to port the patch to other platforms 
> (like intel based graphics cards).
> 

That would rock.

btw any chance of getting these patches accepted/integrated upstream? 

-- Pasi
  
thomas July 30, 2008, 5:43 a.m. UTC | #36
On Tue, Jul 29, 2008 at 01:40:49PM +0300, Pasi Kärkkäinen wrote:
> > Maybe then I find some time to port the patch to other platforms 
> > (like intel based graphics cards).
>
> That would rock.

maybe this way we could fix the current issues with the S100. Picture quality
dramatically improves if the deinterlacer is switched off.

Anyway they made a big step forward these days:

http://forum.zenega-user.de/viewtopic.php?f=17&t=5440&start=15#p43241

> btw any chance of getting these patches accepted/integrated upstream? 

I don't think we will get upstream support in the near future, since TV
applications are the only ones that need to synchronize VGA timing to
an external signal.

-Thomas
  
Gavin Hamill Aug. 8, 2008, 8:23 p.m. UTC | #37
On Tue, 2008-07-22 at 18:37 +0200, Thomas Hilber wrote:
> Hi list,
> 

Finally I have had a chance to try these patches - I managed to get an
old Radeon 7000 PCI (RV100)...

I am using a fresh bare install of Ubuntu hardy, which ships xine-lib
1.1.11, but the patches don't compile :( The Makefile.am changed a
little, but I was able to amend that manually; video_out_xv.c however
spews out this:

video_out_xv.c: In function ‘xv_update_frame_format’:
video_out_xv.c:475: warning: pointer targets in assignment differ in
signedness
video_out_xv.c:481: warning: pointer targets in assignment differ in
signedness
video_out_xv.c:482: warning: pointer targets in assignment differ in
signedness
video_out_xv.c:483: warning: pointer targets in assignment differ in
signedness
video_out_xv.c: In function ‘xv_deinterlace_frame’:
video_out_xv.c:538: warning: pointer targets in assignment differ in
signedness
video_out_xv.c:543: warning: pointer targets in passing argument 1 of
‘deinterlace_yuv’ differ in signedness
video_out_xv.c:547: warning: pointer targets in assignment differ in
signedness
video_out_xv.c:552: warning: pointer targets in passing argument 1 of
‘deinterlace_yuv’ differ in signedness
video_out_xv.c:566: warning: pointer targets in assignment differ in
signedness
video_out_xv.c:571: warning: pointer targets in passing argument 1 of
‘deinterlace_yuv’ differ in signedness
video_out_xv.c:582: warning: pointer targets in assignment differ in
signedness
video_out_xv.c:583: warning: pointer targets in assignment differ in
signedness
video_out_xv.c:590: warning: pointer targets in assignment differ in
signedness
video_out_xv.c:591: warning: pointer targets in assignment differ in
signedness
video_out_xv.c:598: warning: pointer targets in assignment differ in
signedness
video_out_xv.c:599: warning: pointer targets in assignment differ in
signedness
video_out_xv.c: In function ‘xv_display_frame’:
video_out_xv.c:845: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or
‘__attribute__’ before ‘vsync’
video_out_xv.c:845: error: ‘vsync’ undeclared (first use in this
function)
video_out_xv.c:845: error: (Each undeclared identifier is reported only
once
video_out_xv.c:845: error: for each function it appears in.)
video_out_xv.c:853: error: ‘RADEON_SETPARAM_VBLANK_CRTC’ undeclared
(first use in this function)
video_out_xv.c:854: error: ‘DRM_RADEON_VBLANK_CRTC1’ undeclared (first
use in this function)
video_out_xv.c:859: error: ‘DRM_IOCTL_RADEON_VSYNC’ undeclared (first
use in this function)
video_out_xv.c: In function ‘open_plugin_2’:
video_out_xv.c:1769: warning: passing argument 4 of
‘config->register_enum’ from incompatible pointer type
make[4]: *** [xineplug_vo_out_xv_la-video_out_xv.lo] 

I'm no coder so I don't know what I'm looking for.. any advice would be warmly welcomed!


Cheers,
Gavin.
  
thomas Aug. 9, 2008, 3:16 a.m. UTC | #38
On Fri, Aug 08, 2008 at 09:23:34PM +0100, Gavin Hamill wrote:
> Finally I have had a chance to try these patches - I managed to get an
> old Radeon 7000 PCI (RV100)...

nice!

> I am using a fresh bare install of Ubuntu hardy which ships xine-lib
> 1.1.11, but the patches don't compile :( The Makefile.am changed a
> little but I was able to amend that manually, but the video_out_xv.c
> spews out this:
> 
> video_out_xv.c: In function ‘xv_update_frame_format’:
[...]

In the meantime I have reworked everything from scratch. Currently a patch
is only applied against the radeon DRM driver and xserver-xorg-video-ati.

The xine library isn't touched any more, though this will change in the
future. The latest version of the patch is available at:

http://lowbyte.de/vga-sync-fields

Cheers
  Thomas
  
Goga777 Aug. 9, 2008, 7:26 p.m. UTC | #39
Hi Thomas

does your idea actually work for new generation cards - ATI HD series, Intel G35/45 chipsets with HDMI output ?

Goga
  
thomas Aug. 10, 2008, 4:54 a.m. UTC | #40
On Sat, Aug 09, 2008 at 11:26:21PM +0400, Goga777 wrote:
> does your idea actually for new generation cards - ATI HD series, Intel G35/45 chipsets with hdmi output ?

currently it works for everything pre-AVIVO (i.e. before r500, with the
exception of rs690, which has an r300-style 3D core but AVIVO 2D).

I have not yet tried it with ATI HD or Intel G35/45. Basically I don't see a
problem. The devil is in the details:)

To ease the port to other graphics hardware I did not use special 
pre-avivo registers in my current solution. 

The idea comprises several aspects:

The most important feature is to synchronize video output with the
stream. Nobody cared about that until today. I do not understand that at
all.

It is just a pleasant by-product of the sync that in some cases you no
longer need to deinterlace.

Cheers
  Thomas
  
Goga777 Aug. 10, 2008, 3:05 p.m. UTC | #41
thanks for your answer

but does this problem also apply to 3D games, or does it exist only for video playback?

> > does your idea actually for new generation cards - ATI HD series, Intel G35/45 chipsets with hdmi output ?
>
> [...]
  
thomas Aug. 10, 2008, 4:13 p.m. UTC | #42
On Sun, Aug 10, 2008 at 07:05:20PM +0400, Goga777 wrote:
> but for 3d games this problem also actually ? or this issue exists only for video playback ?

3D and playback from disk can sync to the rate provided by the graphics card,
which is of course not possible for live TV.

Cheers
   Thomas
  
Jouni Karvo Aug. 11, 2008, 4:40 p.m. UTC | #43
hi,

with NVIDIA driver 169 and 173 at least, this does not yet work:

Thomas Hilber kirjoitti:
>
> I just use one big hammer instead:)
>
> Option "UseEDID" "FALSE"
>
> That works (mostly).
>   

And the reason is easily read from the driver's README:

    Because these TV modes only depend on the TV encoder and the TV standard,
    TV modes do not go through normal mode validation. The X configuration
    options HorizSync and VertRefresh are not used for TV mode validation.

    Additionally, the NVIDIA driver contains a hardcoded list of mode sizes
    that it can drive for each combination of TV encoder and TV standard.
    Therefore, custom modelines in your X configuration file are ignored
    for TVs.

Setting TV format to PAL-B results in the following modeline (with
predefined 720x576):

DISPLAY=:0.0 xvidtune -show
"720x576"      31.50    720  760  840  880    576  585  588  597 -hsync
-vsync

and PAL-G:
DISPLAY=:0.0 xvidtune -show
"720x576"      31.50    720  760  840  880    576  585  588  597 -hsync
-vsync

(does not change at all...)

I have no idea whether this is 50Hz or 60Hz - I guess it is not interlaced,
at least.

So the question is whether you have used VGA instead of TV-out, or somehow
tricked the driver into respecting your own modelines...

I have attached the relevant part of Xorg.0.log, so you can see which
modelines are available:

(**) NVIDIA(0): Ignoring EDIDs
(II) NVIDIA(0): Support for GLX with the Damage and Composite X
extensions is
(II) NVIDIA(0):     enabled.
(II) NVIDIA(0): NVIDIA GPU GeForce FX 5200 (NV34) at PCI:1:0:0 (GPU-0)
(--) NVIDIA(0): Memory: 131072 kBytes
(II) NVIDIA(0): GPU RAM Type: DDR1
(--) NVIDIA(0): VideoBIOS: 04.34.20.87.00
(--) NVIDIA(0): Found 2 CRTCs on board
(II) NVIDIA(0): Supported display device(s): CRT-0, CRT-1, DFP-0, TV-0
(II) NVIDIA(0): Bus detected as AGP
(II) NVIDIA(0): Detected AGP rate: 8X
(--) NVIDIA(0): Interlaced video modes are supported on this GPU
(II) NVIDIA(0):
(II) NVIDIA(0): Mode timing constraints for  : GeForce FX 5200
(II) NVIDIA(0): Maximum mode timing values   :
(II) NVIDIA(0):     Horizontal Visible Width : 8192
(II) NVIDIA(0):     Horizontal Blank Start   : 8192
(II) NVIDIA(0):     Horizontal Blank Width   : 4096
(II) NVIDIA(0):     Horizontal Sync Start    : 8184
(II) NVIDIA(0):     Horizontal Sync Width    : 504
(II) NVIDIA(0):     Horizontal Total Width   : 8224
(II) NVIDIA(0):     Vertical Visible Height  : 8192
(II) NVIDIA(0):     Vertical Blank Start     : 8192
(II) NVIDIA(0):     Vertical Blank Width     : 256
(II) NVIDIA(0):     Veritcal Sync Start      : 8191
(II) NVIDIA(0):     Vertical Sync Width      : 15
(II) NVIDIA(0):     Vertical Total Height    : 8193
(II) NVIDIA(0):
(II) NVIDIA(0): Minimum mode timing values   :
(II) NVIDIA(0):     Horizontal Total Width   : 40
(II) NVIDIA(0):     Vertical Total Height    : 2
(II) NVIDIA(0):
(II) NVIDIA(0): Mode timing alignment        :
(II) NVIDIA(0):     Horizontal Visible Width : multiples of 8
(II) NVIDIA(0):     Horizontal Blank Start   : multiples of 8
(II) NVIDIA(0):     Horizontal Blank Width   : multiples of 8
(II) NVIDIA(0):     Horizontal Sync Start    : multiples of 8
(II) NVIDIA(0):     Horizontal Sync Width    : multiples of 8
(II) NVIDIA(0):     Horizontal Total Width   : multiples of 8
(II) NVIDIA(0):
(--) NVIDIA(0): Connected display device(s) on GeForce FX 5200 at PCI:1:0:0:
(--) NVIDIA(0):     NVIDIA TV Encoder (TV-0)
(--) NVIDIA(0): NVIDIA TV Encoder (TV-0): 350.0 MHz maximum pixel clock
(--) NVIDIA(0): TV encoder: NVIDIA
(II) NVIDIA(0): TV modes supported by this encoder:
(II) NVIDIA(0):   1024x768; Standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI,
(II) NVIDIA(0):     PAL-N, PAL-NC
(II) NVIDIA(0):   800x600; Standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI,
PAL-N,
(II) NVIDIA(0):     PAL-NC
(II) NVIDIA(0):   720x576; Standards: PAL-BDGHI, PAL-N, PAL-NC
(II) NVIDIA(0):   720x480; Standards: NTSC-M, NTSC-J, PAL-M
(II) NVIDIA(0):   640x480; Standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI,
PAL-N,
(II) NVIDIA(0):     PAL-NC
(II) NVIDIA(0):   640x400; Standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI,
PAL-N,
(II) NVIDIA(0):     PAL-NC
(II) NVIDIA(0):   400x300; Standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI,
PAL-N,
(II) NVIDIA(0):     PAL-NC
(II) NVIDIA(0):   320x240; Standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI,
PAL-N,
(II) NVIDIA(0):     PAL-NC
(II) NVIDIA(0):   320x200; Standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI,
PAL-N,
(II) NVIDIA(0):     PAL-NC
(II) NVIDIA(0): Frequency information for NVIDIA TV Encoder (TV-0):
(II) NVIDIA(0):   HorizSync   : 15.000-16.000 kHz
(II) NVIDIA(0):   VertRefresh : 43.000-72.000 Hz
(II) NVIDIA(0):     (HorizSync from HorizSync in X Config Monitor section)
(II) NVIDIA(0):     (VertRefresh from Conservative Defaults)
(II) NVIDIA(0): Note that the HorizSync and VertRefresh frequency ranges are
(II) NVIDIA(0):     ignored for TV Display Devices; modetimings for TVs will
(II) NVIDIA(0):     be selected based on the capabilities of the NVIDIA TV
(II) NVIDIA(0):     encoder.
(II) NVIDIA(0):
(II) NVIDIA(0): --- Modes in ModePool for NVIDIA TV Encoder (TV-0) ---
(II) NVIDIA(0): "nvidia-auto-select" : 1024 x 768; for use with TV
standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA
Predefined)
(II) NVIDIA(0): "1024x768"           : 1024 x 768; for use with TV
standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA
Predefined)
(II) NVIDIA(0): "800x600"            : 800 x 600; for use with TV
standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA
Predefined)
(II) NVIDIA(0): "720x576"            : 720 x 576; for use with TV
standards: PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA Predefined)
(II) NVIDIA(0): "640x480"            : 640 x 480; for use with TV
standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA
Predefined)
(II) NVIDIA(0): "640x400"            : 640 x 400; for use with TV
standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA
Predefined)
(II) NVIDIA(0): "400x300"            : 400 x 300; for use with TV
standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA
Predefined)
(II) NVIDIA(0): "320x240"            : 320 x 240; for use with TV
standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA
Predefined)
(II) NVIDIA(0): "320x200"            : 320 x 200; for use with TV
standards: NTSC-M, NTSC-J, PAL-M, PAL-BDGHI, PAL-N, PAL-NC (from: NVIDIA
Predefined)
(II) NVIDIA(0): --- End of ModePool for NVIDIA TV Encoder (TV-0): ---
(II) NVIDIA(0):
(II) NVIDIA(0): Assigned Display Device: TV-0
(II) NVIDIA(0): Requested modes:
(II) NVIDIA(0):     "720x576PAL"
(II) NVIDIA(0):     "720x576@50i"
(II) NVIDIA(0):     "720x576i"
(II) NVIDIA(0):     "720x576"
(WW) NVIDIA(0): No valid modes for "720x576PAL"; removing.
(WW) NVIDIA(0): No valid modes for "720x576@50i"; removing.
(WW) NVIDIA(0): No valid modes for "720x576i"; removing.
(II) NVIDIA(0): Validated modes:
(II) NVIDIA(0): MetaMode "720x576":
(II) NVIDIA(0):     Bounding Box: [0, 0, 720, 576]
(II) NVIDIA(0):     NVIDIA TV Encoder (TV-0): "720x576"
(II) NVIDIA(0):         Size          : 720 x 576
(II) NVIDIA(0):         Offset        : +0 +0
(II) NVIDIA(0):         Panning Domain: @ 720 x 576
(II) NVIDIA(0):         Position      : [0, 0, 720, 576]
(II) NVIDIA(0): Virtual screen size determined to be 720 x 576

It seems that NVIDIA also supports HD576i "TVStandard", but I don't know
what to put on the "Modes"-line for that.  At least 720x576 fails.

yours,
       Jouni
  
thomas Aug. 12, 2008, 7:05 a.m. UTC | #44
On Mon, Aug 11, 2008 at 07:40:15PM +0300, Jouni Karvo wrote:
> with NVIDIA driver 169 and 173 at least, this does not yet work:

the patch is not yet ported to nVidia, that's true.

Independent of that, you can configure the nVidia Xserver to output a
PAL/RGB compatible signal and connect a CRT via a VGA-to-SCART cable.

But until the patch is ported to nVidia (if ever) you must use a deinterlacer.

I have attached my 'xorg.conf' and 'Xorg.0.log', which run in several
configurations here without problems. Maybe you can give it a try.

BTW:
we do not use any of these evil TV encoder things. Just forget about
that.

Cheers
   Thomas
Section "ServerLayout"
    Identifier     "Default Layout"
    Screen         "Default Screen" 0 0
    InputDevice    "Generic Keyboard"
    InputDevice    "Configured Mouse"
    Option         "BlankTime" "0"
    Option         "StandbyTime" "0"
    Option         "SuspendTime" "0"
    Option         "OffTime" "0"
EndSection

Section "Files"
    FontPath        "/usr/share/fonts/X11/misc"
EndSection

Section "Module"
    Load           "i2c"
    Load           "bitmap"
    Load           "ddc"
    Load           "extmod"
    Load           "freetype"
    Load           "int10"
    Load           "vbe"
EndSection

Section "ServerFlags"
    Option         "AllowMouseOpenFail" "on"
EndSection

Section "InputDevice"
    Identifier     "Generic Keyboard"
    Driver         "kbd"
    Option         "CoreKeyboard"
    Option         "XkbRules" "xorg"
    Option         "XkbModel" "pc104"
    Option         "XkbLayout" "us"
EndSection

Section "InputDevice"
    Identifier     "Configured Mouse"
    Driver         "mouse"
    Option         "CorePointer"
    Option         "Device" "/dev/input/mice"
    Option         "Protocol" "ImPS/2"
    Option         "Emulate3Buttons" "true"
EndSection

Section "Monitor"
        Identifier      "Generic Monitor"
        Option          "DPMS"

        HorizSync 15-16          
        Modeline "720x576i"   13.875 720  744  808  888  576  580  585  625 -HSync -Vsync interlace 
EndSection

Section "Device"
    Option "UseEDID" "FALSE"
    Option "UseEvents" "True"
    Option "NoLogo" "True"

    Identifier     "Generic Video Card"
    Driver         "nvidia"
EndSection

Section "Screen"
    Identifier     "Default Screen"
    Device         "Generic Video Card"
    Monitor        "Generic Monitor"
    DefaultDepth   24

    SubSection     "Display"
	Depth      24
	Modes      "720x576i"           #RGB
    EndSubSection
EndSection
X Window System Version 7.1.1
Release Date: 12 May 2006
X Protocol Version 11, Revision 0, Release 7.1.1
Build Operating System: UNKNOWN 
Current Operating System: Linux undara 2.6.22-3-k7 #1 SMP Tue Dec 18 14:55:50 CET 2007 i686
Build Date: 29 May 2008
	Before reporting problems, check http://wiki.x.org
	to make sure that you have the latest version.
Module Loader present
Markers: (--) probed, (**) from config file, (==) default setting,
	(++) from command line, (!!) notice, (II) informational,
	(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Tue Aug 12 08:25:18 2008
(==) Using config file: "/etc/X11/xorg.conf"
(==) ServerLayout "Default Layout"
(**) |-->Screen "Default Screen" (0)
(**) |   |-->Monitor "Generic Monitor"
(**) |   |-->Device "Generic Video Card"
(**) |-->Input Device "Generic Keyboard"
(**) |-->Input Device "Configured Mouse"
(**) FontPath set to:
	/usr/share/fonts/X11/misc
(==) RgbPath set to "/etc/X11/rgb"
(==) ModulePath set to "/usr/lib/xorg/modules"
(**) Option "AllowMouseOpenFail" "on"
(**) Option "BlankTime" "0"
(**) Option "StandbyTime" "0"
(**) Option "SuspendTime" "0"
(**) Option "OffTime" "0"
(II) Open ACPI successful (/var/run/acpid.socket)
(II) Module ABI versions:
	X.Org ANSI C Emulation: 0.3
	X.Org Video Driver: 1.0
	X.Org XInput driver : 0.6
	X.Org Server Extension : 0.3
	X.Org Font Renderer : 0.5
(II) Loader running on linux
(II) LoadModule: "bitmap"
(II) Loading /usr/lib/xorg/modules/fonts/libbitmap.so
(II) Module bitmap: vendor="X.Org Foundation"
	compiled for 7.1.1, module version = 1.0.0
	Module class: X.Org Font Renderer
	ABI class: X.Org Font Renderer, version 0.5
(II) Loading font Bitmap
(II) LoadModule: "pcidata"
(II) Loading /usr/lib/xorg/modules/libpcidata.so
(II) Module pcidata: vendor="X.Org Foundation"
	compiled for 7.1.1, module version = 1.0.0
	ABI class: X.Org Video Driver, version 1.0
(++) using VT number 7

(WW) xf86OpenConsole: setpgid failed: Operation not permitted
(WW) xf86OpenConsole: setsid failed: Operation not permitted
(II) PCI: PCI scan (all values are in hex)
(II) PCI: 00:00:0: chip 10de,02f0 card 1043,81c0 rev a2 class 05,00,00 hdr 80
(II) PCI: 00:00:1: chip 10de,02fa card 1043,81c0 rev a2 class 05,00,00 hdr 80
(II) PCI: 00:00:2: chip 10de,02fe card 1043,81c0 rev a2 class 05,00,00 hdr 80
(II) PCI: 00:00:3: chip 10de,02f8 card 1043,81c0 rev a2 class 05,00,00 hdr 80
(II) PCI: 00:00:4: chip 10de,02f9 card 1043,81c0 rev a2 class 05,00,00 hdr 00
(II) PCI: 00:00:5: chip 10de,02ff card 1043,81c0 rev a2 class 05,00,00 hdr 80
(II) PCI: 00:00:6: chip 10de,027f card 1043,81c0 rev a2 class 05,00,00 hdr 80
(II) PCI: 00:00:7: chip 10de,027e card 1043,81c0 rev a2 class 05,00,00 hdr 80
(II) PCI: 00:05:0: chip 10de,0240 card 1043,81cd rev a2 class 03,00,00 hdr 00
(II) PCI: 00:09:0: chip 10de,0270 card 1043,81c0 rev a2 class 05,00,00 hdr 00
(II) PCI: 00:0a:0: chip 10de,0260 card 1043,81c0 rev a3 class 06,01,00 hdr 80
(II) PCI: 00:0a:1: chip 10de,0264 card 1043,81c0 rev a3 class 0c,05,00 hdr 80
(II) PCI: 00:0a:2: chip 10de,0272 card 1043,81c0 rev a3 class 05,00,00 hdr 80
(II) PCI: 00:0b:0: chip 10de,026d card 1043,81c0 rev a3 class 0c,03,10 hdr 80
(II) PCI: 00:0b:1: chip 10de,026e card 1043,81c0 rev a3 class 0c,03,20 hdr 80
(II) PCI: 00:0d:0: chip 10de,0265 card 1043,81c0 rev a1 class 01,01,8a hdr 00
(II) PCI: 00:10:0: chip 10de,026f card 0000,0000 rev a2 class 06,04,01 hdr 81
(II) PCI: 00:10:1: chip 10de,026c card 1043,81cb rev a2 class 04,03,00 hdr 80
(II) PCI: 00:14:0: chip 10de,0269 card 1043,816a rev a3 class 06,80,00 hdr 00
(II) PCI: 00:18:0: chip 1022,1100 card 0000,0000 rev 00 class 06,00,00 hdr 80
(II) PCI: 00:18:1: chip 1022,1101 card 0000,0000 rev 00 class 06,00,00 hdr 80
(II) PCI: 00:18:2: chip 1022,1102 card 0000,0000 rev 00 class 06,00,00 hdr 80
(II) PCI: 00:18:3: chip 1022,1103 card 0000,0000 rev 00 class 06,00,00 hdr 80
(II) PCI: 01:09:0: chip 1131,7146 card 13c2,1018 rev 01 class 04,80,00 hdr 00
(II) PCI: 01:0e:0: chip 1131,7146 card 13c2,1018 rev 01 class 04,80,00 hdr 00
(II) PCI: End of PCI scan
(II) PCI-to-ISA bridge:
(II) Bus -1: bridge is at (0:10:0), (0,-1,-1), BCTRL: 0x0008 (VGA_EN is set)
(II) Subtractive PCI-to-PCI bridge:
(II) Bus 1: bridge is at (0:16:0), (0,1,1), BCTRL: 0x0204 (VGA_EN is cleared)
(II) Bus 1 I/O range:
	[0] -1	0	0x0000e000 - 0x0000e0ff (0x100) IX[B]
	[1] -1	0	0x0000e400 - 0x0000e4ff (0x100) IX[B]
	[2] -1	0	0x0000e800 - 0x0000e8ff (0x100) IX[B]
	[3] -1	0	0x0000ec00 - 0x0000ecff (0x100) IX[B]
(II) Bus 1 non-prefetchable memory range:
	[0] -1	0	0xfdd00000 - 0xfddfffff (0x100000) MX[B]
(II) Bus 1 prefetchable memory range:
	[0] -1	0	0xfde00000 - 0xfdefffff (0x100000) MX[B]
(II) Host-to-PCI bridge:
(II) Bus 0: bridge is at (0:24:0), (0,0,1), BCTRL: 0x0008 (VGA_EN is set)
(II) Bus 0 I/O range:
	[0] -1	0	0x00000000 - 0x0000ffff (0x10000) IX[B]
(II) Bus 0 non-prefetchable memory range:
	[0] -1	0	0x00000000 - 0xffffffff (0x0) MX[B]
(II) Bus 0 prefetchable memory range:
	[0] -1	0	0x00000000 - 0xffffffff (0x0) MX[B]
(--) PCI:*(0:5:0) nVidia Corporation C51PV [GeForce 6150] rev 162, Mem @ 0xfc000000/24, 0xe0000000/28, 0xfb000000/24
(II) Addressable bus resource ranges are
	[0] -1	0	0x00000000 - 0xffffffff (0x0) MX[B]
	[1] -1	0	0x00000000 - 0x0000ffff (0x10000) IX[B]
(II) OS-reported resource ranges:
	[0] -1	0	0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B)
	[1] -1	0	0x000f0000 - 0x000fffff (0x10000) MX[B]
	[2] -1	0	0x000c0000 - 0x000effff (0x30000) MX[B]
	[3] -1	0	0x00000000 - 0x0009ffff (0xa0000) MX[B]
	[4] -1	0	0x0000ffff - 0x0000ffff (0x1) IX[B]
	[5] -1	0	0x00000000 - 0x000000ff (0x100) IX[B]
(II) Active PCI resource ranges:
	[0] -1	0	0xfddfe000 - 0xfddfe1ff (0x200) MX[B]
	[1] -1	0	0xfddff000 - 0xfddff1ff (0x200) MX[B]
	[2] -1	0	0xfe02d000 - 0xfe02dfff (0x1000) MX[B]
	[3] -1	0	0xfe028000 - 0xfe02bfff (0x4000) MX[B]
	[4] -1	0	0xfe02e000 - 0xfe02e0ff (0x100) MX[B]
	[5] -1	0	0xfe02f000 - 0xfe02ffff (0x1000) MX[B]
	[6] -1	0	0xfb000000 - 0xfbffffff (0x1000000) MX[B](B)
	[7] -1	0	0xe0000000 - 0xefffffff (0x10000000) MX[B](B)
	[8] -1	0	0xfc000000 - 0xfcffffff (0x1000000) MX[B](B)
	[9] -1	0	0x0000f000 - 0x0000f007 (0x8) IX[B]
	[10] -1	0	0x0000f400 - 0x0000f40f (0x10) IX[B]
	[11] -1	0	0x00004c40 - 0x00004c7f (0x40) IX[B]
	[12] -1	0	0x00004c00 - 0x00004c3f (0x40) IX[B]
(II) Active PCI resource ranges after removing overlaps:
	[0] -1	0	0xfddfe000 - 0xfddfe1ff (0x200) MX[B]
	[1] -1	0	0xfddff000 - 0xfddff1ff (0x200) MX[B]
	[2] -1	0	0xfe02d000 - 0xfe02dfff (0x1000) MX[B]
	[3] -1	0	0xfe028000 - 0xfe02bfff (0x4000) MX[B]
	[4] -1	0	0xfe02e000 - 0xfe02e0ff (0x100) MX[B]
	[5] -1	0	0xfe02f000 - 0xfe02ffff (0x1000) MX[B]
	[6] -1	0	0xfb000000 - 0xfbffffff (0x1000000) MX[B](B)
	[7] -1	0	0xe0000000 - 0xefffffff (0x10000000) MX[B](B)
	[8] -1	0	0xfc000000 - 0xfcffffff (0x1000000) MX[B](B)
	[9] -1	0	0x0000f000 - 0x0000f007 (0x8) IX[B]
	[10] -1	0	0x0000f400 - 0x0000f40f (0x10) IX[B]
	[11] -1	0	0x00004c40 - 0x00004c7f (0x40) IX[B]
	[12] -1	0	0x00004c00 - 0x00004c3f (0x40) IX[B]
(II) OS-reported resource ranges after removing overlaps with PCI:
	[0] -1	0	0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B)
	[1] -1	0	0x000f0000 - 0x000fffff (0x10000) MX[B]
	[2] -1	0	0x000c0000 - 0x000effff (0x30000) MX[B]
	[3] -1	0	0x00000000 - 0x0009ffff (0xa0000) MX[B]
	[4] -1	0	0x0000ffff - 0x0000ffff (0x1) IX[B]
	[5] -1	0	0x00000000 - 0x000000ff (0x100) IX[B]
(II) All system resource ranges:
	[0] -1	0	0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B)
	[1] -1	0	0x000f0000 - 0x000fffff (0x10000) MX[B]
	[2] -1	0	0x000c0000 - 0x000effff (0x30000) MX[B]
	[3] -1	0	0x00000000 - 0x0009ffff (0xa0000) MX[B]
	[4] -1	0	0xfddfe000 - 0xfddfe1ff (0x200) MX[B]
	[5] -1	0	0xfddff000 - 0xfddff1ff (0x200) MX[B]
	[6] -1	0	0xfe02d000 - 0xfe02dfff (0x1000) MX[B]
	[7] -1	0	0xfe028000 - 0xfe02bfff (0x4000) MX[B]
	[8] -1	0	0xfe02e000 - 0xfe02e0ff (0x100) MX[B]
	[9] -1	0	0xfe02f000 - 0xfe02ffff (0x1000) MX[B]
	[10] -1	0	0xfb000000 - 0xfbffffff (0x1000000) MX[B](B)
	[11] -1	0	0xe0000000 - 0xefffffff (0x10000000) MX[B](B)
	[12] -1	0	0xfc000000 - 0xfcffffff (0x1000000) MX[B](B)
	[13] -1	0	0x0000ffff - 0x0000ffff (0x1) IX[B]
	[14] -1	0	0x00000000 - 0x000000ff (0x100) IX[B]
	[15] -1	0	0x0000f000 - 0x0000f007 (0x8) IX[B]
	[16] -1	0	0x0000f400 - 0x0000f40f (0x10) IX[B]
	[17] -1	0	0x00004c40 - 0x00004c7f (0x40) IX[B]
	[18] -1	0	0x00004c00 - 0x00004c3f (0x40) IX[B]
(II) LoadModule: "i2c"
(II) Loading /usr/lib/xorg/modules/libi2c.so
(II) Module i2c: vendor="X.Org Foundation"
	compiled for 7.1.1, module version = 1.2.0
	ABI class: X.Org Video Driver, version 1.0
(II) LoadModule: "bitmap"
(II) Reloading /usr/lib/xorg/modules/fonts/libbitmap.so
(II) Loading font Bitmap
(II) LoadModule: "ddc"
(II) Loading /usr/lib/xorg/modules/libddc.so
(II) Module ddc: vendor="X.Org Foundation"
	compiled for 7.1.1, module version = 1.0.0
	ABI class: X.Org Video Driver, version 1.0
(II) LoadModule: "extmod"
(II) Loading /usr/lib/xorg/modules/extensions/libextmod.so
(II) Module extmod: vendor="X.Org Foundation"
	compiled for 7.1.1, module version = 1.0.0
	Module class: X.Org Server Extension
	ABI class: X.Org Server Extension, version 0.3
(II) Loading extension SHAPE
(II) Loading extension MIT-SUNDRY-NONSTANDARD
(II) Loading extension BIG-REQUESTS
(II) Loading extension SYNC
(II) Loading extension MIT-SCREEN-SAVER
(II) Loading extension XC-MISC
(II) Loading extension XFree86-VidModeExtension
(II) Loading extension XFree86-Misc
(II) Loading extension XFree86-DGA
(II) Loading extension DPMS
(II) Loading extension TOG-CUP
(II) Loading extension Extended-Visual-Information
(II) Loading extension XVideo
(II) Loading extension XVideo-MotionCompensation
(II) Loading extension X-Resource
(II) LoadModule: "freetype"
(II) Loading /usr/lib/xorg/modules/fonts/libfreetype.so
(II) Module freetype: vendor="X.Org Foundation & the After X-TT Project"
	compiled for 7.1.1, module version = 2.1.0
	Module class: X.Org Font Renderer
	ABI class: X.Org Font Renderer, version 0.5
(II) Loading font FreeType
(II) LoadModule: "int10"
(II) Loading /usr/lib/xorg/modules/libint10.so
(II) Module int10: vendor="X.Org Foundation"
	compiled for 7.1.1, module version = 1.0.0
	ABI class: X.Org Video Driver, version 1.0
(II) LoadModule: "vbe"
(II) Loading /usr/lib/xorg/modules/libvbe.so
(II) Module vbe: vendor="X.Org Foundation"
	compiled for 7.1.1, module version = 1.1.0
	ABI class: X.Org Video Driver, version 1.0
(II) LoadModule: "nvidia"
(II) Loading /usr/lib/xorg/modules/drivers/nvidia_drv.so
(II) Module nvidia: vendor="NVIDIA Corporation"
	compiled for 4.0.2, module version = 1.0.0
	Module class: X.Org Video Driver
(II) LoadModule: "kbd"
(II) Loading /usr/lib/xorg/modules/input/kbd_drv.so
(II) Module kbd: vendor="X.Org Foundation"
	compiled for 7.1.1, module version = 1.1.0
	Module class: X.Org XInput Driver
	ABI class: X.Org XInput driver, version 0.6
(II) LoadModule: "mouse"
(II) Loading /usr/lib/xorg/modules/input/mouse_drv.so
(II) Module mouse: vendor="X.Org Foundation"
	compiled for 7.1.1, module version = 1.1.1
	Module class: X.Org XInput Driver
	ABI class: X.Org XInput driver, version 0.6
(II) NVIDIA dlloader X Driver  100.14.19  Wed Sep 12 14:14:20 PDT 2007
(II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
(II) Primary Device is: PCI 00:05:0
(--) Assigning device section with no busID to primary device
(--) Chipset NVIDIA GPU found
(II) Loading sub module "fb"
(II) LoadModule: "fb"
(II) Loading /usr/lib/xorg/modules/libfb.so
(II) Module fb: vendor="X.Org Foundation"
	compiled for 7.1.1, module version = 1.0.0
	ABI class: X.Org ANSI C Emulation, version 0.3
(II) Loading sub module "wfb"
(II) LoadModule: "wfb"
(II) Loading /usr/lib/xorg/modules/libwfb.so
(II) Module wfb: vendor="NVIDIA Corporation"
	compiled for 7.1.99.2, module version = 1.0.0
(II) Loading sub module "ramdac"
(II) LoadModule: "ramdac"
(II) Loading /usr/lib/xorg/modules/libramdac.so
(II) Module ramdac: vendor="X.Org Foundation"
	compiled for 7.1.1, module version = 0.1.0
	ABI class: X.Org Video Driver, version 1.0
(II) resource ranges after xf86ClaimFixedResources() call:
	[0] -1	0	0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B)
	[1] -1	0	0x000f0000 - 0x000fffff (0x10000) MX[B]
	[2] -1	0	0x000c0000 - 0x000effff (0x30000) MX[B]
	[3] -1	0	0x00000000 - 0x0009ffff (0xa0000) MX[B]
	[4] -1	0	0xfddfe000 - 0xfddfe1ff (0x200) MX[B]
	[5] -1	0	0xfddff000 - 0xfddff1ff (0x200) MX[B]
	[6] -1	0	0xfe02d000 - 0xfe02dfff (0x1000) MX[B]
	[7] -1	0	0xfe028000 - 0xfe02bfff (0x4000) MX[B]
	[8] -1	0	0xfe02e000 - 0xfe02e0ff (0x100) MX[B]
	[9] -1	0	0xfe02f000 - 0xfe02ffff (0x1000) MX[B]
	[10] -1	0	0xfb000000 - 0xfbffffff (0x1000000) MX[B](B)
	[11] -1	0	0xe0000000 - 0xefffffff (0x10000000) MX[B](B)
	[12] -1	0	0xfc000000 - 0xfcffffff (0x1000000) MX[B](B)
	[13] -1	0	0x0000ffff - 0x0000ffff (0x1) IX[B]
	[14] -1	0	0x00000000 - 0x000000ff (0x100) IX[B]
	[15] -1	0	0x0000f000 - 0x0000f007 (0x8) IX[B]
	[16] -1	0	0x0000f400 - 0x0000f40f (0x10) IX[B]
	[17] -1	0	0x00004c40 - 0x00004c7f (0x40) IX[B]
	[18] -1	0	0x00004c00 - 0x00004c3f (0x40) IX[B]
(II) resource ranges after probing:
	[0] -1	0	0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B)
	[1] -1	0	0x000f0000 - 0x000fffff (0x10000) MX[B]
	[2] -1	0	0x000c0000 - 0x000effff (0x30000) MX[B]
	[3] -1	0	0x00000000 - 0x0009ffff (0xa0000) MX[B]
	[4] -1	0	0xfddfe000 - 0xfddfe1ff (0x200) MX[B]
	[5] -1	0	0xfddff000 - 0xfddff1ff (0x200) MX[B]
	[6] -1	0	0xfe02d000 - 0xfe02dfff (0x1000) MX[B]
	[7] -1	0	0xfe028000 - 0xfe02bfff (0x4000) MX[B]
	[8] -1	0	0xfe02e000 - 0xfe02e0ff (0x100) MX[B]
	[9] -1	0	0xfe02f000 - 0xfe02ffff (0x1000) MX[B]
	[10] -1	0	0xfb000000 - 0xfbffffff (0x1000000) MX[B](B)
	[11] -1	0	0xe0000000 - 0xefffffff (0x10000000) MX[B](B)
	[12] -1	0	0xfc000000 - 0xfcffffff (0x1000000) MX[B](B)
	[13] 0	0	0x000a0000 - 0x000affff (0x10000) MS[B]
	[14] 0	0	0x000b0000 - 0x000b7fff (0x8000) MS[B]
	[15] 0	0	0x000b8000 - 0x000bffff (0x8000) MS[B]
	[16] -1	0	0x0000ffff - 0x0000ffff (0x1) IX[B]
	[17] -1	0	0x00000000 - 0x000000ff (0x100) IX[B]
	[18] -1	0	0x0000f000 - 0x0000f007 (0x8) IX[B]
	[19] -1	0	0x0000f400 - 0x0000f40f (0x10) IX[B]
	[20] -1	0	0x00004c40 - 0x00004c7f (0x40) IX[B]
	[21] -1	0	0x00004c00 - 0x00004c3f (0x40) IX[B]
	[22] 0	0	0x000003b0 - 0x000003bb (0xc) IS[B]
	[23] 0	0	0x000003c0 - 0x000003df (0x20) IS[B]
(II) Setting vga for screen 0.
(**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
(==) NVIDIA(0): RGB weight 888
(==) NVIDIA(0): Default visual is TrueColor
(==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
(**) NVIDIA(0): Option "NoLogo" "True"
(**) NVIDIA(0): Option "UseEDID" "FALSE"
(**) NVIDIA(0): Option "UseEvents" "True"
(**) NVIDIA(0): Enabling RENDER acceleration
(**) NVIDIA(0): Ignoring EDIDs
(EE) NVIDIA(0): Failed to initialize the GLX module; please check in your X
(EE) NVIDIA(0):     log file that the GLX module has been loaded in your X
(EE) NVIDIA(0):     server, and that the module is the NVIDIA GLX module.  If
(EE) NVIDIA(0):     you continue to encounter problems, Please try
(EE) NVIDIA(0):     reinstalling the NVIDIA driver.
(II) NVIDIA(GPU-0): Not probing EDID on CRT-0.
(II) NVIDIA(0): NVIDIA GPU GeForce 6150 (C51) at PCI:0:5:0 (GPU-0)
(--) NVIDIA(0): Memory: 262144 kBytes
(--) NVIDIA(0): VideoBIOS: 05.51.22.33.07
(--) NVIDIA(0): Interlaced video modes are supported on this GPU
(--) NVIDIA(0): Connected display device(s) on GeForce 6150 at PCI:0:5:0:
(--) NVIDIA(0):     CRT-0
(--) NVIDIA(0): CRT-0: 350.0 MHz maximum pixel clock
(II) NVIDIA(0): Assigned Display Device: CRT-0
(II) NVIDIA(0): Validated modes:
(II) NVIDIA(0):     "720x576i"
(II) NVIDIA(0): Virtual screen size determined to be 720 x 576
(WW) NVIDIA(0): Unable to get display device CRT-0's EDID; cannot compute DPI
(WW) NVIDIA(0):     from CRT-0's EDID.
(==) NVIDIA(0): DPI set to (75, 75); computed from built-in default
(==) NVIDIA(0): Disabling 32-bit ARGB GLX visuals.
(--) Depth 24 pixmap format is 32 bpp
(II) do I need RAC?  No, I don't.
(II) resource ranges after preInit:
	[0] 0	0	0xfb000000 - 0xfbffffff (0x1000000) MX[B]
	[1] 0	0	0xe0000000 - 0xefffffff (0x10000000) MX[B]
	[2] 0	0	0xfc000000 - 0xfcffffff (0x1000000) MX[B]
	[3] -1	0	0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B)
	[4] -1	0	0x000f0000 - 0x000fffff (0x10000) MX[B]
	[5] -1	0	0x000c0000 - 0x000effff (0x30000) MX[B]
	[6] -1	0	0x00000000 - 0x0009ffff (0xa0000) MX[B]
	[7] -1	0	0xfddfe000 - 0xfddfe1ff (0x200) MX[B]
	[8] -1	0	0xfddff000 - 0xfddff1ff (0x200) MX[B]
	[9] -1	0	0xfe02d000 - 0xfe02dfff (0x1000) MX[B]
	[10] -1	0	0xfe028000 - 0xfe02bfff (0x4000) MX[B]
	[11] -1	0	0xfe02e000 - 0xfe02e0ff (0x100) MX[B]
	[12] -1	0	0xfe02f000 - 0xfe02ffff (0x1000) MX[B]
	[13] -1	0	0xfb000000 - 0xfbffffff (0x1000000) MX[B](B)
	[14] -1	0	0xe0000000 - 0xefffffff (0x10000000) MX[B](B)
	[15] -1	0	0xfc000000 - 0xfcffffff (0x1000000) MX[B](B)
	[16] 0	0	0x000a0000 - 0x000affff (0x10000) MS[B](OprD)
	[17] 0	0	0x000b0000 - 0x000b7fff (0x8000) MS[B](OprD)
	[18] 0	0	0x000b8000 - 0x000bffff (0x8000) MS[B](OprD)
	[19] -1	0	0x0000ffff - 0x0000ffff (0x1) IX[B]
	[20] -1	0	0x00000000 - 0x000000ff (0x100) IX[B]
	[21] -1	0	0x0000f000 - 0x0000f007 (0x8) IX[B]
	[22] -1	0	0x0000f400 - 0x0000f40f (0x10) IX[B]
	[23] -1	0	0x00004c40 - 0x00004c7f (0x40) IX[B]
	[24] -1	0	0x00004c00 - 0x00004c3f (0x40) IX[B]
	[25] 0	0	0x000003b0 - 0x000003bb (0xc) IS[B](OprU)
	[26] 0	0	0x000003c0 - 0x000003df (0x20) IS[B](OprU)
(II) NVIDIA(0): Initialized GART.
(II) NVIDIA(0): Setting mode "720x576i"
(II) Loading extension NV-GLX
(II) NVIDIA(0): NVIDIA 3D Acceleration Architecture Initialized
(II) NVIDIA(0): Using the NVIDIA 2D acceleration architecture
(==) NVIDIA(0): Backing store disabled
(==) NVIDIA(0): Silken mouse enabled
(**) Option "dpms"
(**) NVIDIA(0): DPMS enabled
(II) Loading extension NV-CONTROL
(==) RandR enabled
(II) Initializing built-in extension MIT-SHM
(II) Initializing built-in extension XInputExtension
(II) Initializing built-in extension XTEST
(II) Initializing built-in extension XKEYBOARD
(II) Initializing built-in extension XC-APPGROUP
(II) Initializing built-in extension SECURITY
(II) Initializing built-in extension XINERAMA
(II) Initializing built-in extension XFIXES
(II) Initializing built-in extension XFree86-Bigfont
(II) Initializing built-in extension RENDER
(II) Initializing built-in extension RANDR
(II) Initializing built-in extension COMPOSITE
(II) Initializing built-in extension DAMAGE
(II) Initializing built-in extension XEVIE
(**) Option "CoreKeyboard"
(**) Generic Keyboard: Core Keyboard
(**) Option "Protocol" "standard"
(**) Generic Keyboard: Protocol: standard
(**) Option "AutoRepeat" "500 30"
(**) Option "XkbRules" "xorg"
(**) Generic Keyboard: XkbRules: "xorg"
(**) Option "XkbModel" "pc104"
(**) Generic Keyboard: XkbModel: "pc104"
(**) Option "XkbLayout" "us"
(**) Generic Keyboard: XkbLayout: "us"
(**) Option "CustomKeycodes" "off"
(**) Generic Keyboard: CustomKeycodes disabled
(**) Option "Protocol" "ImPS/2"
(**) Configured Mouse: Device: "/dev/input/mice"
(**) Configured Mouse: Protocol: "ImPS/2"
(**) Option "CorePointer"
(**) Configured Mouse: Core Pointer
(**) Option "Device" "/dev/input/mice"
(**) Option "Emulate3Buttons" "true"
(**) Configured Mouse: Emulate3Buttons, Emulate3Timeout: 50
(**) Configured Mouse: ZAxisMapping: buttons 4 and 5
(**) Configured Mouse: Buttons: 9
(II) XINPUT: Adding extended input device "Configured Mouse" (type: MOUSE)
(II) XINPUT: Adding extended input device "Generic Keyboard" (type: KEYBOARD)
    xkb_keycodes             { include "xfree86+aliases(qwerty)" };
    xkb_types                { include "complete" };
    xkb_compatibility        { include "complete" };
    xkb_symbols              { include "pc(pc105)+us" };
    xkb_geometry             { include "pc(pc104)" };
(II) Configured Mouse: ps2EnableDataReporting: succeeded
  
Theunis Potgieter Aug. 12, 2008, 8:29 a.m. UTC | #45
Does somebody have a URL on how to make one? For D-sub to SCART, or the new
DVI (modern graphics cards) to SCART?

On 12/08/2008, Thomas Hilber <vdr@toh.cx> wrote:
>
> On Mon, Aug 11, 2008 at 07:40:15PM +0300, Jouni Karvo wrote:
> > with NVIDIA driver 169 and 173 at least, this does not yet work:
>
>
> the patch is not yet ported to nVidia that's true.
>
> Independent from that you can configure the nVidia-Xserver to output a
> PAL/RGB compatible signal. And connect a CRT via a VGA-to-SCART cable.
>
> But until the patch is ported to nVidia (if ever) you must use a
> deinterlacer.
>
> I attached my 'xorg.conf' and 'Xorg.0.log' which runs in several
> configurations here without problems. Maybe you give it a try.
>
> BTW:
> we do not use any of these evil TV encoder things. Just forget about
> that.
>
> Cheers
>
>    Thomas
>
  
Pasi Kärkkäinen Aug. 12, 2008, 11:01 a.m. UTC | #46
On Tue, Aug 12, 2008 at 10:29:53AM +0200, Theunis Potgieter wrote:
> Does somebody have a URL on how to make one? for d-sub to scart or the new
> DVI (modern graphic cards) to scart?
>

http://www.sput.nl/hardware/tv-x.html

That URL was included in the first mail of this thread..

-- Pasi
 
> On 12/08/2008, Thomas Hilber <vdr@toh.cx> wrote:
> >
> > On Mon, Aug 11, 2008 at 07:40:15PM +0300, Jouni Karvo wrote:
> > > with NVIDIA driver 169 and 173 at least, this does not yet work:
> >
> >
> > the patch is not yet ported to nVidia that's true.
> >
> > Independent from that you can configure the nVidia-Xserver to output a
> > PAL/RGB compatible signal. And connect a CRT via a VGA-to-SCART cable.
> >
> > But until the patch is ported to nVidia (if ever) you must use a
> > deinterlacer.
> >
> > I attached my 'xorg.conf' and 'Xorg.0.log' which runs in several
> > configurations here without problems. Maybe you give it a try.
> >
> > BTW:
> > we do not use any of these evil TV encoder things. Just forget about
> > that.
> >
> > Cheers
> >
> >    Thomas
> >
> >
  
Gavin Hamill Aug. 12, 2008, 11:08 a.m. UTC | #47
On Tue, 2008-08-12 at 14:01 +0300, Pasi Kärkkäinen wrote:
> On Tue, Aug 12, 2008 at 10:29:53AM +0200, Theunis Potgieter wrote:
> > Does somebody have a URL on how to make one? for d-sub to scart or the new
> > DVI (modern graphic cards) to scart?
> >
> 
> http://www.sput.nl/hardware/tv-x.html
> 
> That URL was included in the first mail of this thread..

You can also use this if you have a Radeon and use 'composite' on the
modeline instead of '-hsync -vsync' :

http://www.idiots.org.uk/vga_rgb_scart/index.html
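
For illustration, a Monitor section for that case could look roughly like
this (untested sketch - the same 720x576i PAL timings as the modeline
attached earlier in this thread, only with composite sync instead of
separate h/v sync; double-check against your TV before using it):

Section "Monitor"
    Identifier  "SCART TV"
    HorizSync   15-16
    Modeline "720x576i" 13.875  720 744 808 888  576 580 585 625  composite interlace
EndSection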

Just please be careful - you can destroy your TV by sending it VGA-spec
signals!

Cheers,
Gavin.
  
Pasi Kärkkäinen Aug. 12, 2008, 11:52 a.m. UTC | #48
On Tue, Aug 12, 2008 at 12:08:55PM +0100, Gavin Hamill wrote:
> On Tue, 2008-08-12 at 14:01 +0300, Pasi Kärkkäinen wrote:
> > On Tue, Aug 12, 2008 at 10:29:53AM +0200, Theunis Potgieter wrote:
> > > Does somebody have a URL on how to make one? for d-sub to scart or the new
> > > DVI (modern graphic cards) to scart?
> > >
> > 
> > http://www.sput.nl/hardware/tv-x.html
> > 
> > That URL was included in the first mail of this thread..
> 
> You can also use this if you have a Radeon and use 'composite' on the
> modeline instead of '-hsync -vsync' :
> 
> http://www.idiots.org.uk/vga_rgb_scart/index.html
> 
> Just please be careful - you can destroy your TV by sending it VGA-spec
> signals!
> 

These two links seem to describe somewhat different ways of building the cable..
The first link has a more "complicated" cable..

Can someone try and compare these or explain the differences? 

-- Pasi
  
Gavin Hamill Aug. 12, 2008, 12:49 p.m. UTC | #49
On Tue, 2008-08-12 at 14:52 +0300, Pasi Kärkkäinen wrote:

> These two links seem to have a bit different ways of doing the cable.. 
> The first link has more "complicated" cable.. 
> 
> Can someone try and compare these or explain the differences? 

Only Radeons can output a composite sync signal. That's why the second
link will only work on Radeons.

The simple circuit in the first link merely takes separate horizontal and
vertical syncs and combines them into the composite sync required for TV
display. As such it will work on any VGA card, but since Thomas' work is
restricted to supporting Radeons, there seems little point in making
things more complex by building a circuit rather than just a few wires
and one resistor :)

Note that pin 9 on the Radeon VGA port will provide +5V for you to feed
into SCART pin 16 to tell your TV that it's an RGB signal. i.e. you
don't need to take a feed from your PC PSU.

Cheers,
Gavin.


  
Pasi Kärkkäinen Aug. 12, 2008, 1:44 p.m. UTC | #50
On Wed, Jul 30, 2008 at 07:43:19AM +0200, Thomas Hilber wrote:
> On Tue, Jul 29, 2008 at 01:40:49PM +0300, Pasi Kärkkäinen wrote:
> > > Maybe then I find some time to port the patch to other platforms 
> > > (like intel based graphics cards).
> >
> > That would rock.
> 
> maybe this way we could fix current issues with S100. Picture quality
> dramaticly improves if deinterlacer is switched off.
> 
> Anyway they made a big step forward these days:
> 
> http://forum.zenega-user.de/viewtopic.php?f=17&t=5440&start=15#p43241
> 
> > btw any chance of getting these patches accepted/integrated upstream? 
> 
> I don't think we get upstream support in the near future. Since TV 
> applications are the only ones that need to synchronize VGA timing to 
> an external signal.
> 

Ok.. the other day you sent a mail saying you had reworked the patches, so
that made me wonder if it would be possible to make these patches friendly
enough to get them accepted upstream :) 

-- Pasi
  
thomas Aug. 12, 2008, 5:19 p.m. UTC | #51
On Tue, Aug 12, 2008 at 10:29:53AM +0200, Theunis Potgieter wrote:
> Does somebody have a URL on how to make one? for d-sub to scart or the new
> DVI (modern graphic cards) to scart?

my favorite cable works with all graphics cards supporting RGB. I don't 
think it's too complex:)

you find it in the 'README' of my packages at:

http://lowbyte.de/vga-sync-fields

or here:

==============================================================================
circuit diagram of my favorite VGA-to-SCART adaptor:

   VGA                                                SCART

     1 -O------------------------------------------O- 15 R
     2 -O------------------------------------------O- 11 G
     3 -O------------------------------------------O-  7 B

     6 -O---------------------------------------+--O- 13 R Gnd
     7 -O---------------------------------------+--O-  9 G Gnd
     8 -O---------------------------------------+--O-  5 B Gnd
    10 -O---------------------------------------+--O- 17   Gnd
                                                +--O- 14   Gnd
                                                +--O- 18   Gnd
               ------
     9 -O-----|  75R |-----------------------------O- 16
               ------
-VS 14 -O-----------------------+
                                |
                             | /
               ------        |C
-HS 13 -O-----| 680R |-----B-|     BC 547 B
               ------        |E
                             | \
                                |       ------
                                +------| 680R |----O- 20 -CS
                                        ------
  shell-O------------------------------------------O- 21 shell

==============================================================================

Cheers
   Thomas
  
Goga777 Aug. 12, 2008, 8:03 p.m. UTC | #52
Hi

again, several questions :)

does your project actually work for dvb 720p channels too?
and I have a maybe stupid question - why is the frame rate from satellites not stable? is it 50i?

Goga
  
Gavin Hamill Aug. 12, 2008, 9:44 p.m. UTC | #53
On Tue, 2008-08-12 at 19:19 +0200, Thomas Hilber wrote:
> On Tue, Aug 12, 2008 at 10:29:53AM +0200, Theunis Potgieter wrote:

Hi again, Thomas :)

I have a system not dissimilar to yours.. it's a P3-1GHz with PCI Radeon
7000. OS is Ubuntu hardy with the patches from your 0.0.2 release. 

Now that I've got the patches in place, I get a stable desktop display
on the TV. 

If I start vdr / xineliboutput, the picture will be Ok for a second..
then it'll move up and down (like camera wobble, but it moves the
onscreen logos / VDR menus, too!).

I see this kind of thing at least once per second :

[ 2706.402871] [drm] changed radeon drift trim from 00520125 -> 0052018c

If I quit vdr (leaving X running), and run the 'drift_control' tool, I
see a drift speed of approx -3900 for 4 seconds, then +16000 marked 
'excessive drift speed'

It's much the same story on the output of the 'startx' console..  lots
of  <- resyncing field polarity M1 -> and  

sync point displacement:       -365
drift speed:                 -13004 excessive drift speed
overall compensation:          -461 

every couple of seconds :(

The system has no load since it'll become my new VDR box (hopefully :)

Cheers,
Gavin.
  
thomas Aug. 13, 2008, 2:39 p.m. UTC | #54
On Tue, Aug 12, 2008 at 10:44:37PM +0100, Gavin Hamill wrote:
> I have a system not dissimilar to yours.. it's a P3-1GHz with PCI Radeon
> 7000. OS is Ubuntu hardy with the patches from your 0.0.2 release. 

ok

> Now that I've got the patches in place, I get a stable desktop display
> on the TV. 

good

> If I start vdr / xineliboutput, the picture will be Ok for a second..
> then it'll move up and down (like camera wobble, but it moves the
> onscreen logos / VDR menus, too!).

I guess it's because it wants to resync the initial field polarity.

> I see this kind of thing at least once per second :
> 
> [ 2706.402871] [drm] changed radeon drift trim from 00520125 -> 0052018c

right. The lowest byte 8c confirms my assumption about field polarity.

> If I quit vdr (leaving X running), and run the 'drift_control' tool, I
> see a drift speed of approx -3900 for 4 seconds, then +16000 marked 
> 'excessive drift speed'

I think it is best to proceed step by step. Could you please do
the following and report the results? To be on the safe side I just repeated
all steps myself with reproducible results:

1. for the moment please comment out both of these in 'radeon_video.c' and 'drift_control'

//#define RESYNC_FIELD_POLARITY_METHOD1
//#define RESYNC_FIELD_POLARITY_METHOD2

because this clearly must be fixed in xine, even though it works in my
current configuration. Maybe we can re-enable it later.

2. start the Xserver (but still without vdr)
3. run 'drift_control a'

this should typically give you output like this:

# drift_control a
tv now:           1218633290.553468
tv vbl:           1218633290.538542
vbls:                         43163
trim:                    0x00520100
sync point displacement:       9871   
drift speed:                    -19 
overall compensation:           339 
o. c. clipped:                  339
trim absolute:                  339
t. a. clipped:                   37
new trim:                0x80520125

tv now:           1218633291.553497
tv vbl:           1218633291.539525
vbls:                         43213
trim:                    0x00520125
sync point displacement:       3972
drift speed:                   -954 
overall compensation:           104 
o. c. clipped:                  104
trim absolute:                  141
t. a. clipped:                   37
new trim:                0x80520125

tv now:           1218633292.553471
tv vbl:           1218633292.540529
vbls:                         43263
trim:                    0x00520125
sync point displacement:       2942
drift speed:                  -1030 
overall compensation:            65 
o. c. clipped:                   65
trim absolute:                  102
t. a. clipped:                   37
new trim:                0x80520125

tv now:           1218633293.553429
tv vbl:           1218633293.541534
vbls:                         43313
trim:                    0x00520125
sync point displacement:       1895
drift speed:                  -1047 
overall compensation:            29 
o. c. clipped:                   29
trim absolute:                   66
t. a. clipped:                   37
new trim:                0x80520125

tv now:           1218633294.553387
tv vbl:           1218633294.542539
vbls:                         43363
trim:                    0x00520125
sync point displacement:        848
drift speed:                  -1047 
overall compensation:            -6 
o. c. clipped:                   -6
trim absolute:                   31
t. a. clipped:                   31
new trim:                0x8052011f

tv now:           1218633295.553358
tv vbl:           1218633295.543374
vbls:                         43413
trim:                    0x0052011f
sync point displacement:        -16
drift speed:                   -864 
overall compensation:           -30 
o. c. clipped:                  -30
trim absolute:                    1
t. a. clipped:                    1
new trim:                0x80520101

tv now:           1218633296.553329
tv vbl:           1218633296.543358
vbls:                         43463
trim:                    0x00520101
sync point displacement:        -29
drift speed:                    -13 
overall compensation:            -1 completed
o. c. clipped:                   -1
trim absolute:                    0
t. a. clipped:                    0
new trim:                0x80520100

tv now:           1218633297.553298
tv vbl:           1218633297.543296
vbls:                         43513
trim:                    0x00520100
sync point displacement:          2
drift speed:                     31 
overall compensation:             1 completed
o. c. clipped:                    1
trim absolute:                    1
t. a. clipped:                    1
new trim:                0x80520101

tv now:           1218633298.553269
tv vbl:           1218633298.543262
vbls:                         43563
trim:                    0x00520101
sync point displacement:          7
drift speed:                      5 
overall compensation:             0 completed
o. c. clipped:                    0
trim absolute:                    1
t. a. clipped:                    1
new trim:                0x80520101

it is important that after some time 'sync point displacement' and 'drift speed'
are floating around zero.

4. stop drift_control 
5. unload all dvb modules (there are known issues with some)
6. start vdr with local sxfe frontend (make channels.conf zero size file)
7. start replay of some recording. Because field polarity is not synced
automatically anymore you can manually restart replay until polarity is 
correct.

this should typically give you output (Xorg.0.log) like this:

sync point displacement:      -7816
drift speed:                   -716
overall compensation:          -294 
sync point displacement:      -7503
drift speed:                    832
overall compensation:          -230
sync point displacement:      -6514
drift speed:                   1293
overall compensation:          -180 
sync point displacement:      -5394
drift speed:                    906
overall compensation:          -154 
sync point displacement:      -4261
drift speed:                   1226
overall compensation:          -104 
sync point displacement:      -3142
drift speed:                   1154
overall compensation:           -68 
sync point displacement:      -2006
drift speed:                   1034
overall compensation:           -33
sync point displacement:       -875
drift speed:                   1218
overall compensation:            11 
sync point displacement:         89
drift speed:                    796
overall compensation:            30 
sync point displacement:        470
drift speed:                    -75
overall compensation:            13
sync point displacement:        235
drift speed:                   -391
overall compensation:            -5
sync point displacement:       -127
drift speed:                   -258
overall compensation:           -13
sync point displacement:       -230
drift speed:                     55
overall compensation:            -6
sync point displacement:        -38
drift speed:                    271
overall compensation:             8
sync point displacement:         99
drift speed:                     43
overall compensation:             4
sync point displacement:         93
drift speed:                    -62
overall compensation:             1 completed
sync point displacement:        -15
drift speed:                   -107
overall compensation:            -4 
sync point displacement:        -58
drift speed:                     -2
overall compensation:            -2
sync point displacement:        -30
drift speed:                     41
overall compensation:             0 completed
sync point displacement:         23
drift speed:                    -27
overall compensation:             0 completed

again, as in our previous example with drift_control, the value of 'sync
point displacement' mostly starts at a very high offset. The algorithm
uses 'drift speed' to converge 'sync point displacement' towards zero.
After a few cycles even an 'overall compensation' of 0 is possible.
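
Just to sketch the idea behind these values (this is NOT the real
drift_control code - the gain and the clipping limit below are made up for
illustration only): the measured displacement and its change per second are
combined into a small, clipped correction which is then applied to the
card's timing.

/*
 * rough sketch of such a trim control loop - not the actual drift_control
 * source; gain and clipping limit are invented for illustration.
 * build e.g. with: gcc -o trimsketch trimsketch.c
 */
#include <stdio.h>

#define TRIM_CLIP 37   /* assumed maximum trim correction per step */

static int clip(int v, int lim)
{
    if (v >  lim) return  lim;
    if (v < -lim) return -lim;
    return v;
}

/*
 * displacement: measured distance of PutImage() from the sync point (usecs)
 * prev:         the displacement measured one second earlier
 */
static int compensation(int displacement, int prev)
{
    int drift_speed = displacement - prev;                /* derivative part */
    int overall     = (displacement + drift_speed) / 30;  /* assumed gain    */
    return clip(overall, TRIM_CLIP);                      /* clipped output  */
}

int main(void)
{
    /* a made-up sequence of measured displacements, only roughly shaped
     * like the sample output above */
    int samples[] = { 9871, 3972, 2942, 1895, 848, -16, -29, 2, 7 };
    int n    = (int)(sizeof samples / sizeof samples[0]);
    int prev = samples[0];

    for (int i = 0; i < n; i++) {
        int trim = compensation(samples[i], prev);
        printf("sync point displacement: %6d  correction: %4d\n",
               samples[i], trim);
        prev = samples[i];
    }
    return 0;
}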

The picture quality should be as good as you are accustomed to from an RGB/PAL CRT.

If the 'drift speed' value still shows large deviations from zero in the end, this
could be a problem in your xine-lib. In that case I will upload my current xine-lib
version to my web server.

Good luck
   Thomas
  
thomas Aug. 13, 2008, 2:54 p.m. UTC | #55
On Wed, Aug 13, 2008 at 12:03:23AM +0400, Goga777 wrote:
> does your project actually fro dvb 720p channels too ?

720p is 1280x720. At least the part that does the frame rate sync VGA<->DVB
can be recycled for this. 

> and I have may be stupid question - why frame rate from satellites is not stable ? is it 50i ?

real life systems don't work 100% perfectly in the mathematical sense. That's why
we must find a way to compensate for aberrations. BTW that's exactly what
a FF card does, too.

Cheers
   Thomas
  
Gavin Hamill Aug. 13, 2008, 3:21 p.m. UTC | #56
On Wed, 2008-08-13 at 16:39 +0200, Thomas Hilber wrote:
> 1. for the moment please comment out both of these in 'radeon_video.c' and 'drift_control'
> 
> //#define RESYNC_FIELD_POLARITY_METHOD1
> //#define RESYNC_FIELD_POLARITY_METHOD2

Done, recompiled + reinstall the .deb, and recompiled the drift_control
binary..

> 3. run 'drift_control a'

> it is important after some time 'sync point displacement' and 'drift speed'
> are floating around zero.

overall comp floats  -1 to 2, but sync point floats -44 to +45, and
drift speed floats -40 to +40. ta absolute + clipped are 5 -> 7. I find
it odd that my 'vbls' value is 15500 when yours is 43000..

> 4. stop drift_control 
> 5. unload all dvb modules (there are known issues with some)

I forgot to mention this before - I'm not using any dvb modules - I'm
using the streamdev-client to source live TV via HTTP from my live VDR
box.

I'll have to wait until I can be in front of the machine to try the
other tests.. will follow-up then.

Many thanks for your time and effort :)

Cheers,
Gavin.
  
thomas Aug. 13, 2008, 4:14 p.m. UTC | #57
On Wed, Aug 13, 2008 at 04:21:25PM +0100, Gavin Hamill wrote:
> overall comp floats  -1 to 2, but sync point floats -44 to +45, and
> drift speed floats -40 to +40. ta absolute + clipped are 5 -> 7. I find

that's pretty good. Same values here.

> it odd that my 'vbls' value is 15500 when yours is 43000..

no - that's also ok! vbls continuously counts VBL interrupts since Xserver
start. Next time you restart you will again have a completely
different offset.

> > 4. stop drift_control 
> > 5. unload all dvb modules (there are known issues with some)
> 
> I forgot to mention this before - I'm not using any dvb modules - I'm
> using the streamdev-client to source live TV via HTTP from my live VDR
> box.

ok. There is still a lot left to do until the patch works on
all possible configurations. Sorry - for now I can only speak for my own
(simple) configuration.

> I'll have to wait until I can be in front of the machine to try the
> other tests.. will follow-up then.
> 
> Many thanks for your time and effort :)

thank you for testing:-)

Cheers
   Thomas
  
thomas Aug. 13, 2008, 4:40 p.m. UTC | #58
On Mon, Aug 11, 2008 at 07:40:15PM +0300, Jouni Karvo wrote:
> with NVIDIA driver 169 and 173 at least, this does not yet work:

I cannot confirm that. I just downloaded and installed the most recent

NVIDIA-Linux-x86-173.14.12.pkg1.run

It's running perfectly VGA->SCART with *exactly* the xorg.conf I posted above.

Cheers
   Thomas
  
thomas Aug. 13, 2008, 5:08 p.m. UTC | #59
On Tue, Aug 12, 2008 at 04:44:59PM +0300, Pasi Kärkkäinen wrote:
> that made me wonder if it would be possible to make these patches friendly
> enough to get them accepted upstream :) 

sorry - but I really can't care about this at the current state of
development
  
Gavin Hamill Aug. 13, 2008, 8:09 p.m. UTC | #60
On Wed, 2008-08-13 at 16:39 +0200, Thomas Hilber wrote:

And now.. part 2 :)

> 4. stop drift_control 
> 5. unload all dvb modules (there are known issues with some)
> 6. start vdr with local sxfe frontend (make channels.conf zero size file)
> 7. start replay of some recording. Because field polarity is not synced
> automatically anymore you can manually restart replay until polarity is 
> correct.
> 

OK, found a suitable recording, and after a couple of 'play 1 begin' to
SVDRP it starts OK. The picture is pretty good, but it still shifts
around the screen a bit:

drift speed:                    982 
overall compensation:            30 
sync point displacement:       1118
drift speed:                    422 
overall compensation:            53 
sync point displacement:         64
drift speed:                   -916 
overall compensation:           -29 
sync point displacement:        613
drift speed:                   -282 
overall compensation:            11 
sync point displacement:       -216
drift speed:                   -369 
overall compensation:           -20 
sync point displacement:       -499
drift speed:                    163 
overall compensation:           -11 
sync point displacement:       -685
drift speed:                    393 
overall compensation:           -10 
sync point displacement:       3175
drift speed:                   8509 
overall compensation:           402 
sync point displacement:       6186
drift speed:                 -13504 excessive drift speed
overall compensation:          -252 
sync point displacement:       -846
drift speed:                  -4863 
overall compensation:          -196 
sync point displacement:       -430
drift speed:                   7566 
overall compensation:           246 
sync point displacement:       -438
drift speed:                  -8988 

So, wow yes it's still all over the place :(

FWIW, the output is not once per second.. there is often a delay of up
to 4 seconds before another group of 3 lines is displayed.

> If the 'drift speed' value finally does show large deviations from zero this
> could be a problem in your xine-lib. In that case I upload my current xine-lib
> version to my web server.

Ouch. 

I tried first with the xine-lib 1.11.1 which ships with ubuntu hardy,
and then forward-ported the 1.1.7 packages from gutsy (whilst
repackaging the xineliboutput support for 1.1.7) and had exactly the
same problem. I just find it a bit strange that the same problem should
manifest itself with both an older and a newer xine-lib than you use
(listed as 1.1.8 in your original post)

So, I started looking for other reasons. Whilst X + vdr are running, the
Xorg process is taking 40% CPU, with vdr taking 25%. The 'system' CPU
usage is 32%, with 16% for user processes. I thought maybe it was using
X11 output rather than xv, and thus causing a drain on the system...

I have executed 'xhost +' to eliminate X security issues... and the
syslog shows all positive output:

 starting plugin: xineliboutput
 Local decoder/display (cXinelibThread) thread started (pid=14236,
tid=14242)
 [xine..put] xineliboutput: plugin file
is /usr/lib/vdr/plugins/libvdr-xineliboutput.so.1.6.0
 [xine..put] Searching frontend sxfe from /usr/lib/vdr/plugins/
 [xine..put]
Probing /usr/lib/vdr/plugins/libxineliboutput-sxfe.so.1.0.0rc2
 [xine..put] load_frontend: entry at 0xb569a154
 [xine..put] Using frontend sxfe (X11 (sxfe)) from
libxineliboutput-sxfe.so.1.0.0rc2
 [xine..put] cXinelibLocal::Action - fe created
 [vdr-fe]    sxfe_display_open(width=720, height=576, fullscreen=1,
display=:0)
 [vdr-fe]    Display size : 190 x 152 mm
 [vdr-fe]                   720 x 576 pixels
 [vdr-fe]                   96dpi / 96dpi
 [vdr-fe]    Display ratio: 3789.000000/3789.000000 = 1.000000
 [vdr-fe]    Failed to open connection to bus: Failed to execute
dbus-launch to autolaunch D-Bus session
 [vdr-fe]       (ERROR (gnome_screensaver.c,55): Resource temporarily
unavailable)
 [xine..put] cXinelibLocal::Action - fe->fe_display_open ok
 [xine..put] cXinelibLocal::Action - xine_init
 [vdr-fe]    fe_xine_init: xine_open_audio_driver("alsa:default") failed
 [xine..put] cXinelibLocal::Action - fe->xine_init ok
 [xine..put] cXinelibLocal::Action - xine_open

'xvinfo' shows all the good stuff (pages of capabilities), too.

So I'm not entirely sure where to take it from here. Clearly it can
work, but I must be missing a piece..

Sorry - it's a bit of a mixed bag response - I was hoping it would be
much more clear cut!

Cheers,
Gavin.
  
Jouni Karvo Aug. 14, 2008, 5:50 a.m. UTC | #61
hi,

Thomas Hilber wrote:
> On Mon, Aug 11, 2008 at 07:40:15PM +0300, Jouni Karvo wrote:
>   
>> with NVIDIA driver 169 and 173 at least, this does not yet work:
>>     
>
> I cannot confirm that. I just downloaded and installed most recent
>
> NVIDIA-Linux-x86-173.14.12.pkg1.run
>
> It's running perfectly VGA->SCART with *exactly* the xorg.conf I posted above.
>   

your trick is the VGA->SCART cable.  I was using the TVout from the 
card.  I have ordered the components for the cable, and I hope I'll be 
able to solder them together during the weekend.  I hope I can then 
reproduce your success :)

yours,
       Jouni
  
thomas Aug. 14, 2008, 7:09 a.m. UTC | #62
On Thu, Aug 14, 2008 at 08:50:43AM +0300, Jouni Karvo wrote:
> your trick is the VGA->SCART cable.  I was using the TVout from the 
> card.  I have ordered the components for the cable, and I hope I'll be 
> able to solder them together during the weekend.  I hope I can then 
> reproduce your success :)

ok, I'm sure you will! The picture quality over SCART is way better than
TV-out. Though you still have to deinterlace on nVidia.

Cheers
   Thomas
  
thomas Aug. 14, 2008, 9:25 a.m. UTC | #63
On Wed, Aug 13, 2008 at 09:09:45PM +0100, Gavin Hamill wrote:
> OK, found a suitable recording, and after a couple of 'play 1 begin' to
> SVDRP it starts OK. The picture is pretty good, but it still shifts
> around the screen a bit:

that's because the PLL encounters very large jumps in 'sync point
displacement' that it must compensate for. We must find out where these
leaps come from.

BTW:
If I start a kernel build in the background I get similar effects:)

In your case it appears to be the Xserver itself consuming huge amounts
of CPU resources for some yet unknown reason (see below).

[...]
> overall compensation:           -11 
> sync point displacement:       -685  <------+
> drift speed:                    393         |
> overall compensation:           -10         | there is no
> sync point displacement:       3175  <------+ stability
> drift speed:                   8509         |
> overall compensation:           402         |
> sync point displacement:       6186  <------+
[...]

> FWIW, the output is not once per second.. there is often a delay of up
> to 4 seconds before another group of 3 lines is displayed.

strange. Here the output goes to the screen exactly once per second, and
is also updated every second through 'tail -F /var/log/Xorg.0.log'.
I think that could be related to the '40% Xserver CPU' phenomenon (see below).

> So, I started looking for other reasons. Whilst X + vdr are running, the
> Xorg process is taking 40% CPU, with vdr taking 25%. The 'system' CPU
> usage is 32%, with 16% for user processes. I thought maybe it was using
> X11 output rather than xv, and thus causing a drain on the system...

oh - a very interesting fact.
That's different from mine (see my output of top below). Xorg takes only 0.7%(!)
CPU on my system. Are there some special patches in ubuntu that cause
this?

This appears to be the root cause of our problem!

Does the Xserver poll for some unavailable resource or something?
A value of 40% CPU is way too much. The only process consuming significant CPU
power should be 'vdr' whilst decoding. Most other processes shouldn't have
much to do most of the time.
We must dig deeper into that '40% Xserver CPU' phenomenon!
DISPLAY environment variable is set to DISPLAY=:0 ?

again a typical Xserver output:

sync point displacement:        -26
drift speed:                    -71 
overall compensation:            -3 
sync point displacement:        -31
drift speed:                    100 
overall compensation:             2 
sync point displacement:        -25
drift speed:                    -57 
overall compensation:            -2 
sync point displacement:        -23
drift speed:                     12 
overall compensation:             0 completed
sync point displacement:         24
drift speed:                     63 
overall compensation:             3 
sync point displacement:          6
drift speed:                    -72 
overall compensation:            -2 
sync point displacement:        -10
drift speed:                    -24 
overall compensation:            -1 completed
sync point displacement:          5
drift speed:                     60 
overall compensation:             2 

while at the same time you get these messages in '/var/log/messages'.
You see the correction is only floating a little:

kernel: [drm] changed radeon drift trim from 00520101 -> 00520105
kernel: [drm] changed radeon drift trim from 00520105 -> 00520104
kernel: [drm] changed radeon drift trim from 00520104 -> 00520101
kernel: [drm] changed radeon drift trim from 00520101 -> 00520103
kernel: [drm] changed radeon drift trim from 00520103 -> 00520101
kernel: [drm] changed radeon drift trim from 00520101 -> 00520104
kernel: [drm] changed radeon drift trim from 00520104 -> 00520102
kernel: [drm] changed radeon drift trim from 00520102 -> 00520101
kernel: [drm] changed radeon drift trim from 00520101 -> 00520103

at the same time I get following values through 'vmstat 1':

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0      0 349304  11228 107228    0    0  1788     0  286  777 22  1 77  0
 0  0      0 349296  11228 107216    0    0     0     0  294  787 22  0 78  0
 0  0      0 347364  11232 109012    0    0  1804     0  299  780 22  2 76  0
 0  0      0 347364  11232 109024    0    0     0     0  281  767 23  1 76  0
 0  0      0 345572  11232 110824    0    0  1796     0  300  782 24  0 76  0
 0  0      0 345564  11240 110820    0    0     0    72  295  799 24  1 75  0
 0  0      0 343896  11240 112596    0    0  1800     0  294  781 24  1 75  0
 0  0      0 343896  11240 112596    0    0     0     0  287  776 25  0 75  0
 0  0      0 342104  11240 114396    0    0  1808     0  293  781 24  2 74  0
 0  0      0 342096  11240 114404    0    0     0    20  291  780 26  1 73  0
 0  0      0 340304  11248 116196    0    0  1800    56  307  779 25  1 74  0
 0  0      0 340296  11248 116204    0    0     0     0  281  768 24  2 74  0
 0  0      0 338504  11248 118004    0    0  1788     0  285  764 21  4 75  0
 0  0      0 338512  11248 117992    0    0     0     0  283  745 27  0 73  0
 0  0      0 344776  11248 111580    0    0  1788     4  300  775 23  2 75  0

and top:

top - 10:48:13 up  1:33,  8 users,  load average: 0.22, 0.09, 0.02
Tasks:  58 total,   2 running,  56 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.0%us,  0.3%sy, 24.3%ni, 74.3%id,  0.0%wa,  0.0%hi,  0.0%si, 0.0%st
Mem:    516368k total,   173356k used,   343012k free,    11416k buffers
Swap:  3903784k total,        0k used,  3903784k free,   113200k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND     
 8858 root      20   0  149m  25m  14m S 16.0  5.0   0:31.03 vdr        
 8798 root      20   0  294m  15m  12m S  0.7  3.1   0:01.54 Xorg      
 8894 root      20   0  2316 1096  872 R  0.7  0.2   0:00.28 top      
    1 root      20   0  2028  708  604 S  0.0  0.1   0:01.26 init    
    2 root      15  -5     0    0    0 S  0.0  0.0   0:00.00 kthreadd     
    3 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 migration/0   
    4 root      15  -5     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/0  

You see, Xorg is barely noticeable on my system!

Can you strace the Xserver? Maybe you can try the Debian experimental packages
like I do? Don't they run on Ubuntu as well?

If it would help you I can offer to make a copy of my entire development
system (about 800MB as a compressed tar image). It's based on current Debian
lenny. This way you would instantly have a system running as expected.
From there you could then activate your additional components step by step
and see which component causes the failure.

Cheers
   Thomas
  
Jouni Karvo Aug. 14, 2008, 9:40 a.m. UTC | #64
> On Wed, Aug 13, 2008 at 09:09:45PM +0100, Gavin Hamill wrote:
>   
>
>> So, I started looking for other reasons. Whilst X + vdr are running, the
>> Xorg process is taking 40% CPU, with vdr taking 25%. The 'system' CPU
>> usage is 32%, with 16% for user processes. I thought maybe it was using
>> X11 output rather than xv, and thus causing a drain on the system...
>>     
>
>   

Have you checked that your display driver is OK?  MTRR?  Are you sure 
you use e.g. XV and not XShm?

Also, VDR taking 25% of resources looks pretty high.  Can you check 
without plugins?  (or is the 25% already including a software player?)

yours,
       Jouni
  
Petri Hintukainen Aug. 14, 2008, 10:15 a.m. UTC | #65
to, 2008-08-14 kello 11:25 +0200, Thomas Hilber kirjoitti:
> On Wed, Aug 13, 2008 at 09:09:45PM +0100, Gavin Hamill wrote:
> > Xorg process is taking 40% CPU, with vdr taking 25%. The 'system' CPU
> > usage is 32%, with 16% for user processes. 
[...]
> Does the Xserver poll for some resources not available or something?

Maybe the driver is waiting for a free overlay buffer? Some drivers wait
for a free hardware overlay buffer in a simple busy loop.

Usually this can be seen only when the video player draws Xv frames faster
than the actual output rate (e.g. displaying 50p video with a 50p display
mode).
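
Just as an illustration (a generic sketch, not taken from any real driver -
overlay_busy() is a made-up stand-in for a status register read), such a
busy wait is simply:

static void wait_for_overlay(int (*overlay_busy)(void))
{
    /* spin until the hardware overlay buffer is free again -
     * all that spinning is accounted to the Xorg process */
    while (overlay_busy())
        ;
}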


- Petri
  
thomas Aug. 14, 2008, 11:53 a.m. UTC | #66
On Thu, Aug 14, 2008 at 01:15:58PM +0300, Petri Hintukainen wrote:
> to, 2008-08-14 kello 11:25 +0200, Thomas Hilber kirjoitti:
> > Does the Xserver poll for some resources not available or something?
> 
> Maybe the driver is waiting for free overlay buffer ? Some drivers wait
> for free hardware overlay buffer in simple busy loop.

a good idea, but in the case of 'xserver-xorg-video-ati' true hardware
double buffering is supported. If a new PutImage() comes in, the DDX simply
toggles to the other buffer and starts to write there - no matter whether
that buffer has ever been completely read out by the CRT controller.

So there is no mechanism waiting for anything here as far as I can see.
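
Roughly speaking (a simplified sketch with invented names, not the actual
DDX code) the double buffered path looks like this:

#include <string.h>

struct overlay {
    void   *buf[2];   /* two hardware overlay buffers           */
    size_t  len;      /* size of one buffer                     */
    int     back;     /* index of the buffer written to next    */
};

static void put_image(struct overlay *ov, const void *frame)
{
    memcpy(ov->buf[ov->back], frame, ov->len);  /* fill the back buffer       */
    /* ...point the overlay scanout at the freshly written buffer here...     */
    ov->back ^= 1;                              /* toggle - nothing blocks    */
}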

> Usually this can be seen only when video player draws Xv frames faster
> than the actual output rate (ex. displaying 50p video with 50p display
> mode).

To see the effect in practice I just set the VGA frame rate to several values
slightly below 50Hz (i.e. in the range 49.94 - 49.99Hz). Applying all
these 'low' frame rates leads to dropped fields as expected.
But Xserver CPU usage always stays around 2% maximum.

The only exception here is if I press the 'OK' button. The OSD 'time-shift' 
bar showing up then costs about 16%CPU. Strangely enough if I open the 
'recordings' OSD which covers almost the entire screen this takes only 
about 6%CPU.

BTW:
my xineliboutput OSD setup is as follows:

xineliboutput.OSD.AlphaCorrection = 0
xineliboutput.OSD.AlphaCorrectionAbs = 0
xineliboutput.OSD.Downscale = 1
xineliboutput.OSD.ExtSubSize = -1
xineliboutput.OSD.HideMainMenu = 0
xineliboutput.OSD.LayersVisible = 0
xineliboutput.OSD.Prescale = 1
xineliboutput.OSD.SpuAutoSelect = 0
xineliboutput.OSD.UnscaledAlways = 0
xineliboutput.OSD.UnscaledLowRes = 0
xineliboutput.OSD.UnscaledOpaque = 0

But anyway, all these values are still in the 'green area' and are
compensated for by the patch.

A value of 40% CPU as Gavin posted above I could never reproduce on my system.
Something must be broken there.

Cheers
   Thomas
  
Torgeir Veimo Aug. 14, 2008, 12:22 p.m. UTC | #67
On 14 Aug 2008, at 21:53, Thomas Hilber wrote:

> a good idea, but in the case of 'xserver-xorg-video-ati' true hardware
> double buffers are supported. If a new PutImage() comes in the DDX  
> simply
> toggles to the other double buffer and starts to write there. No  
> matter
> this buffer ever has been completely read by CRT controller.
>
> So there is no mechanism waiting here for something as far as I can  
> see.


Since you're using the vsync irq in any case, the best solution would  
be to notify user space at irq time that it should 'PutImage' a new  
frame.
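
From user space something like this would already be possible with the stock
libdrm vblank ioctl (an untested sketch; it assumes the standard
drmWaitVBlank() call and linking against -ldrm):

#include <stdio.h>
#include <string.h>
#include <xf86drm.h>

int main(void)
{
    int fd = drmOpen("radeon", NULL);   /* same DRM device the patch uses */
    if (fd < 0) {
        fprintf(stderr, "drmOpen failed\n");
        return 1;
    }

    for (;;) {
        drmVBlank vbl;
        memset(&vbl, 0, sizeof vbl);
        vbl.request.type     = DRM_VBLANK_RELATIVE;  /* wait for next vblank */
        vbl.request.sequence = 1;

        if (drmWaitVBlank(fd, &vbl)) {
            perror("drmWaitVBlank");
            break;
        }
        /* here the player would be woken up to PutImage() the next frame */
    }
    drmClose(fd);
    return 0;
}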
  
thomas Aug. 14, 2008, 12:47 p.m. UTC | #68
On Thu, Aug 14, 2008 at 10:22:53PM +1000, Torgeir Veimo wrote:
> Since you're using the vsync irq in any case, the best solution would  
> be to notify user space at irq time that it should 'PutImage' a new  
> frame.

I know what you want to say. But according to my understanding xine has
its own heartbeat, and from this, the stream PTS and some other parameters
the rate at which images are put to the Xserver is finally derived.

I consider this rate an 'ideal' rate, i.e. free from any hardware
constraints. I really don't want to change that.

Rather, I try my best to program the hardware as close as possible to xine's
'ideal' rate. That's the main intention of the patch.

Cheers
  Thomas
  
Gavin Hamill Aug. 14, 2008, 1:22 p.m. UTC | #69
On Thu, 2008-08-14 at 11:25 +0200, Thomas Hilber wrote:

Good heavens, this is all getting rather heavyweight :)

> oh - a very interesting fact.
> that's different to mine (see my output of top below). Xorg takes only 0.7%(!)
> CPU on my system. Are there some special patches in ubuntu that causes
> this?

1% CPU is about what I would expect for xv usage - after all the whole
point is for the app to write directly to video memory with minimal
'processing'

A skirt around the problem with Google reveals very little - only a
string of users complaining that their silly 3D desktop is slow /
unstable (who would have thought? :)

> This appears be the root cause of our problem!
> 
> Does the Xserver poll for some resources not available or something?
> A value of 40% CPU is way too much. The only process consuming some CPU
> power should be 'vdr' whilst decoding. Most other processes don't have
> to do much all over the time.

It should be said that Xorg is idle when just showing a desktop. It's
only when video is played that usage shoots up.

> We must dig deeper into that '40% Xserver-CPU' phenomenon! 
> DISPLAY environment variable is set to DISPLAY=:0 ?

Yes. I tried also using mplayer -vo xv /video/blahhhh/12313131/001.vdr
and that also generated the same amount of load in Xorg. However, since
the PC (Dell Optiplex) has onboard Intel 810 VGA, I removed the radeon
and tried it. The same mplayer test yielded only 6% Xorg CPU. Still
higher than I would expect, but it was an 800x600 VGA display. 

Even deleting the xorg.conf and letting the radeon driver choose 'best
defaults' I get the 40% CPU load.

> You see Xorg is almost not noticable on my system!
> 
> Can you strace the Xserver? Maybe you can try Debian experimental packages
> like I do? Don't the run on Ubuntu as well?

Well, the Debian experimental packages installed OK, but refused to
start:

/usr/bin/X11/X: symbol lookup
error: /usr/lib/xorg/modules/drivers//radeon_drv.so: undefined symbol:
pci_device_map_range
giving up.
xinit:  Connection refused (errno 111):  unable to connect to X server
xinit:  No such process (errno 3):  unexpected signal 2.

(yes, the radeon driver package was upgraded to the experimental one :)

and now I am unable to reinstall the ubuntu xorg due to circular
dependencies and very strange package behaviour (see [1]), so I've given
up on this installation. A shame, since I'd done well and not installed
anything into /usr/local this time :)

> If it would help you I can offer you to make a copy of my entire development
> system (about 800MB as compressed tar image). 

At this stage that sounds like a good idea. I originally intended to
install lenny but the Debian netinst + 'testing2' iso claimed there was
no hard disk on the PC (I had the same experience earlier that day with
a server at work), so I tried Ubuntu which installed perfectly. 

Are you suggesting to provide a tarball that I can 'tar xzf' into a
freshly-formatted root partition (then run grub) ?

Cheers,
Gavin.

[1] root@rgb:~# apt-get install xserver-xorg xserver-xorg-core
The following packages have unmet dependencies.
  xserver-xorg: Depends: x11-xkb-utils but it is not going to be
installed
                  PreDepends: x11-common (>= 1:7.3+3) but it is not
going to be installed
  xserver-xorg-core: Depends: libfontenc1 but it is not going to be
installed
                     Depends: libxau6 but it is not going to be
installed
                     Depends: libxdmcp6 but it is not going to be
installed
                     Depends: libxfont1 (>= 1:1.2.9) but it is not going
to be installed
                     Depends: x11-common (>= 1:7.0.0) but it is not
going to be installed

All the Depends: packages are /already/ installed and meet those version
requirements!
  
thomas Aug. 14, 2008, 1:34 p.m. UTC | #70
On Thu, Aug 14, 2008 at 02:22:46PM +0100, Gavin Hamill wrote:
> [1] root@rgb:~# apt-get install xserver-xorg xserver-xorg-core
> The following packages have unmet dependencies.
>   xserver-xorg: Depends: x11-xkb-utils but it is not going to be
> installed
>                   PreDepends: x11-common (>= 1:7.3+3) but it is not
> going to be installed
>   xserver-xorg-core: Depends: libfontenc1 but it is not going to be
> installed
>                      Depends: libxau6 but it is not going to be
> installed
>                      Depends: libxdmcp6 but it is not going to be
> installed
>                      Depends: libxfont1 (>= 1:1.2.9) but it is not going
> to be installed
>                      Depends: x11-common (>= 1:7.0.0) but it is not
> going to be installed
> 
> All the Depends: packages are /already/ installed and meet those version
> requirements!

maybe a forced purge/uninstall and reinstall of consistent Debian packages
would help?

> Are you suggesting to provide a tarball that I can 'tar xzf' into a
> freshly-formatted root partition (then run grub) ?

right. I will prepare it within the next few hours and leave you a message with
the URL where to download it.

Cheers
   Thomas
  
Gavin Hamill Aug. 14, 2008, 1:35 p.m. UTC | #71
On Thu, 2008-08-14 at 14:22 +0100, Gavin Hamill wrote:
> On Thu, 2008-08-14 at 11:25 +0200, Thomas Hilber wrote:
> 

Oh, aptitude solved the dependencies for me (needed to explicitly
downgrade one package, then all was well.)

Here's the "vmstat 1" output during mplayer playback... i.e. no madness
with interrupts / context switching...

procs -----------memory---------- ---swap-- -----io---- -system--
----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy
id wa
 2  0  39684   4616  13172 186380    0    0   512     0  259  128 41 31
29  0
 1  0  39684   4224  13172 186764    0    0   384     0  252  119 38 31
30  0
 2  0  39684   3860  13176 187144    0    0   384     4  262  131 42 30
28  0
 1  0  39684   4340  12820 187228    0  628   388   628  266  131 40 31
30  0


gdh
  
Gavin Hamill Aug. 16, 2008, 11:41 a.m. UTC | #72
Hi all,

Over the last days, Thomas and I have been trying to sort out why my 
nearly-identical machine couldn't run his VGA sync patches properly.

The key difference is my Radeon 7000VE is PCI, whilst his is AGP. I 
tried the PCI Radeon in two old Pentium-3 era machines, and on my modern 
Pentium D930 desktop, all with the same behaviour - fullscreen video 
over PCI causes huge CPU usage in the Xorg process, even when using xv 
'acceleration'.

When I switch the PCI Radeon for a PCI Express X300 (the very lowest 'X' 
series you can get), everything is glorious: Xorg CPU use is barely 1%.

Unfortunately I don't have any machines with both AGP and PCI on which 
I can try the same OS image but we both think it's safe to conclude that 
PCI is just unsuitable for this task.

Many thanks to Thomas for writing the patches in the first place, and 
also for the time he's spent logged into my machine remotely trying to 
solve the problem!

Cheers,
Gavin.
  
Theunis Potgieter Aug. 16, 2008, 12:13 p.m. UTC | #73
Would be nice if someone could test the AMD Athlon 64 2000+ on an AMD
platform with the 780G chipset on a microATX board, because it can do HD
resolution (1920x1200) with high picture quality through the
DVI/HDMI ports.

On 16/08/2008, Gavin Hamill <gdh@acentral.co.uk> wrote:
>
> Hi all,
>
> Over the last days, Thomas and I have been trying to sort out why my
> nearly-identical machine couldn't run his VGA sync patches properly.
>
> The key difference is my Radeon 7000VE is PCI, whilst his is AGP. I
> tried the PCI Radeon in two old Pentium-3 era machines, and on my modern
> Pentium D930 desktop, all with the same behaviour - fullscreen video
> over PCI causes huge CPU usage in the Xorg process, even when using xv
> 'acceleration'.
>
> When I switch the PCI Radeon for a PCI Express X300 (the very lowest 'X'
> series you can get), everything is glorious: Xorg CPU use is barely 1%.
>
> Unfortunately I don't have any machines with both AGP and PCI on which
> I can try the same OS image but we both think it's safe to conclude that
> PCI is just unsuitable for this task.
>
> Many thanks to Thomas for writing the patches in the first place, and
> also for the time he's spent logged into my machine remotely trying to
> solve the problem!
>
> Cheers,
> Gavin.
>
  
Pertti Kosunen Aug. 16, 2008, 1:12 p.m. UTC | #74
Theunis Potgieter wrote:
> Would be nice if someone could test the AMD Athlon 64 2000+ on a AMD 
> platform, the 780G chip set on a microATX board, because it can do HD 
> resolution (1920x1200) with high picture quality is possible through 
> DVI/HDMI ports.

Hardware decoding of H.264 isn't supported in Linux. Software decoding of
even 720p H.264 might be impossible with a 1 GHz processor.
  
Artur Skawina Aug. 17, 2008, 1:41 a.m. UTC | #75
Gavin Hamill wrote:
> Over the last days, Thomas and I have been trying to sort out why my 
> nearly-identical machine couldn't run his VGA sync patches properly.
> 
> The key difference is my Radeon 7000VE is PCI, whilst his is AGP. I 
> tried the PCI Radeon in two old Pentium-3 era machines, and on my modern 
> Pentium D930 desktop, all with the same behaviour - fullscreen video 
> over PCI causes huge CPU usage in the Xorg process, even when using xv 
> 'acceleration'.
> 
> When I switch the PCI Radeon for a PCI Express X300 (the very lowest 'X' 
> series you can get), everything is glorious: Xorg CPU use is barely 1%.
> 
> Unfortunately I don't have any machines with both AGP and PCI on which 
> I can try the same OS image but we both think it's safe to conclude that 
> PCI is just unsuitable for this task.

PCI in general should be perfectly fine, for SDTV at least. 
While displaying SDTV (vdrsxfe) I see ~20% cpu use for X on AGP, ~44% on PCI
(same machine, different heads, AGP is MGA450, PCI is MGA200).

The huge difference is likely due to something else, like
- display (X) driver (but even drivers which just memcpy the video
   data to the (xv) framebuffer should work on a modern machine)
- PCI chipset (eg I had a VIA-based mobo, and it couldn't even keep up
   with SDTV on a PCI head, swapping the mobo for one w/ a real chipset
   made all problems suddenly disappear...)

You could probably do some setpci tweaks to improve PCI throughput, but
I doubt the gain would be enough (I'd expect 10% improvement or so).

artur
  
Gavin Hamill Aug. 17, 2008, 3:31 p.m. UTC | #76
On Sun, 2008-08-17 at 03:41 +0200, Artur Skawina wrote:

> PCI in general should be perfectly fine, for SDTV at least. 
> While displaying SDTV (vdrsxfe) I see ~20% cpu use for X on AGP, ~44% on PCI
> (same machine, different heads, AGP is MGA450, PCI is MGA200).

Yes, 40% CPU has been what I've seen. The problem is that it's system
CPU usage rather than userspace. Due to the critical timing nature of
the patches, they need to have nearly the whole machine to themselves,
thus PCI DMA overhead causing things to be 'a bit sticky' is just too
much :/

> You could probably do some setpci tweaks to improve PCI throughput, but
> I doubt the gain would be enough (I'd expect 10% improvement or so).

I did try to twiddle with setting PCI latency timers but it had no
measurable effect..

Cheers,
Gavin.
  
thomas Aug. 17, 2008, 4:25 p.m. UTC | #77
On Sun, Aug 17, 2008 at 04:31:58PM +0100, Gavin Hamill wrote:
> CPU usage rather than userspace. Due to the critical timing nature of
> the patches, they need to have nearly the whole machine to themselves,

the patches are time critical insofar as xine itself must time the
frames very accurately.

Even my old 800MHz Pentium with AGP Radeon shows that a frame indeed arrives
at the Xserver's PutImage() every 40000usecs +-35usecs.

It's by far not necessary for the patches to receive frames that accurately
in order to work, but it shows what is possible even on old and slow hardware.

On Gavin's machine with PCI DMA problems we instead measured a frame arriving
at the Xserver's PutImage() every 40000usecs +-21000usecs.

That is way too unstable. I think xine itself also can't cope with that.
At least it will show heavy jerkiness.

Nonetheless I today released a new version of the patches with 100% less
sensitivity to timing problems (see today's announcement).

Cheers
   Thomas
  
Theunis Potgieter Aug. 27, 2008, 1:08 p.m. UTC | #78
I found this to be useful for me, however I'm using PAL@50Hz and not NTSC
colour encoding.

http://www.linuxis.us/linux/media/howto/linux-htpc/video_card_configuration.html

Nice background information.

On 17/08/2008, Thomas Hilber <vdr@toh.cx> wrote:
>
> On Sun, Aug 17, 2008 at 04:31:58PM +0100, Gavin Hamill wrote:
> > CPU usage rather than userspace. Due to the critical timing nature of
> > the patches, they need to have nearly the whole machine to themselves,
>
>
> the patches are time critical as far as xine itself must time the
> frames very accurately.
>
> Even my old 800Mhz Pentium with AGP-Radeon shows that indeed every
> 40000usecs +-35usecs a frame comes to Xserver's PutImage().
>
> It's by far not neccessary for the patches to work to get frames that
> accurately but it shows what is possible even on old and slow hardware.
>
> On Gavin's machine with PCI DMA problems we instead timed
> 40000usecs +-21000usecs a frame comes to the Xserver's PutImage().
>
> That is way too unstable. I think xine itself also can't cope with that.
> At least it will show heavy jerkyness.
>
> Nonetheless I today released a new version of the patches with 100%
> lesser sensivity to timing problems (see announcement of today).
>
> Cheers
>
>    Thomas
>
>
  
thomas Aug. 27, 2008, 5:43 p.m. UTC | #79
On Wed, Aug 27, 2008 at 03:08:14PM +0200, Theunis Potgieter wrote:
> I found this to be useful for me, however I'm using PAL@50Hz and not NTSC
> colour encoding.
> 
> http://www.linuxis.us/linux/media/howto/linux-htpc/video_card_configuration.html
> 
> Nice background information.

Right - some nice info. But the vga-sync-fields patch now invalidates some
of those statements. It's no longer true that software decoders must sync to
the graphics card. In our case it's the other way round, as it should be.

The patch of course is for PAL@50Hz though NTSC should also be possible.

In the meantime I issued a few more releases of the vga-sync-fields patch. 
Version vga-sync-fields-0.0.7 together with xineliboutput Version 1.0.1 or 
newer and parameter setting

xineliboutput.Advanced.LiveModeSync = 0

gives very good results for both viewing recordings and Live-TV. The system is
already in productive use here. I will describe the new setup in my next release.

Cheers
  Thomas
  
thomas Sept. 27, 2008, 6:35 a.m. UTC | #80
a successor of my vga-sync-fields patch (http://lowbyte.de/vga-sync-fields/)
has now been released by 'durchflieger' on 'vdr-portal.de' with far more
functionality, especially for HDTV related things.

please see:

http://www.vdr-portal.de/board/thread.php?threadid=80567

- sparkie
  
Pasi Kärkkäinen Sept. 27, 2008, 10:25 a.m. UTC | #81
On Sat, Sep 27, 2008 at 08:35:23AM +0200, Thomas Hilber wrote:
> a successor of my vga-sync-fields patch (http://lowbyte.de/vga-sync-fields/)
> now has been released by 'durchflieger' on 'vdr-portal.de' with far more 
> functionality especially for HDTV related things. 
> 
> please see:
> 
> http://www.vdr-portal.de/board/thread.php?threadid=80567
> 

Too bad the text is not english :(

-- Pasi
  
Dex Sept. 27, 2008, 2:55 p.m. UTC | #82
>>
>> please see:
>>
>> http://www.vdr-portal.de/board/thread.php?threadid=80567
>>
>
> Too bad the text is not english :(

Not at all. Because it’s a good motivation to learn one more language :)
  
thomas Sept. 27, 2008, 3:07 p.m. UTC | #83
On Sat, Sep 27, 2008 at 02:55:31PM +0000, Bruno wrote:
> Not at all. Because it's a good motivation to learn one more language :)

the primary intention of the patch was not a German language lesson
I think:)

the patch itself is written in C and the README is in English.

Cheers
  Thomas
  

Patch

diff -ru xine-lib.org/src/video_out/Makefile.am xine-lib/src/video_out/Makefile.am
--- xine-lib.org/src/video_out/Makefile.am	2007-08-29 21:56:36.000000000 +0200
+++ xine-lib/src/video_out/Makefile.am	2008-07-11 16:29:26.000000000 +0200
@@ -116,7 +116,7 @@ 
 xineplug_vo_out_xshm_la_CFLAGS = $(VISIBILITY_FLAG) $(X_CFLAGS) $(MLIB_CFLAGS) -fno-strict-aliasing
 
 xineplug_vo_out_xv_la_SOURCES = $(X11OSD) deinterlace.c video_out_xv.c
-xineplug_vo_out_xv_la_LIBADD = $(XV_LIBS) $(X_LIBS) $(XINE_LIB) $(PTHREAD_LIBS) $(LTLIBINTL)
+xineplug_vo_out_xv_la_LIBADD = $(XV_LIBS) $(X_LIBS) $(XINE_LIB) $(PTHREAD_LIBS) $(LTLIBINTL) -ldrm
 xineplug_vo_out_xv_la_CFLAGS = $(VISIBILITY_FLAG) $(X_CFLAGS) $(XV_CFLAGS) -fno-strict-aliasing
 
 xineplug_vo_out_xvmc_la_SOURCES = deinterlace.c video_out_xvmc.c
diff -ru xine-lib.org/src/video_out/video_out_xv.c xine-lib/src/video_out/video_out_xv.c
--- xine-lib.org/src/video_out/video_out_xv.c	2008-02-07 18:03:12.000000000 +0100
+++ xine-lib/src/video_out/video_out_xv.c	2008-07-20 21:12:08.000000000 +0200
@@ -73,6 +73,20 @@ 
 #include "vo_scale.h"
 #include "x11osd.h"
 
+#define SYNC_FIELDS
+
+#ifdef SYNC_FIELDS    
+#define LENNY
+#include <sys/ioctl.h>
+#ifdef LENNY
+# include <drm/drm.h>
+#else
+# include <xf86drm.h>
+#endif
+#include <drm/radeon_drm.h>
+extern int drmOpen();
+#endif
+
 #define LOCK_DISPLAY(this) {if(this->lock_display) this->lock_display(this->user_data); \
                             else XLockDisplay(this->display);}
 #define UNLOCK_DISPLAY(this) {if(this->unlock_display) this->unlock_display(this->user_data); \
@@ -827,6 +841,103 @@ 
   LOCK_DISPLAY(this);
   start_time = timeOfDay();
   if (this->use_shm) {
+
+#ifdef SYNC_FIELDS    
+    static int fd;
+    static drm_radeon_vsync_t vsync;
+
+    if (!fd) {
+    	drm_radeon_setparam_t vbl_activate;
+
+        if ((fd = drmOpen("radeon", 0)) < 0) {
+	    printf("drmOpen: %s\n", strerror(errno));
+	}
+	vbl_activate.param = RADEON_SETPARAM_VBLANK_CRTC;
+	vbl_activate.value = DRM_RADEON_VBLANK_CRTC1;
+	if (ioctl(fd, DRM_IOCTL_RADEON_SETPARAM, &vbl_activate)) {
+	    printf("DRM_IOCTL_RADEON_SETPARAM: %s\n", strerror(errno)); 
+	}
+    }
+    if (ioctl(fd, DRM_IOCTL_RADEON_VSYNC, &vsync)) {
+	printf("DRM_IOCTL_RADEON_VSYNC: %s\n", strerror(errno));
+    }
+
+/*
+ * here we continuously monitor and correct placement of xine-lib's
+ * xv_display_frame() call in relation to vertical blanking intervals (VBI)
+ * of graphics card.
+ *
+ * to achieve maximum immunity against jitter we always center the call 
+ * to xv_display_frame() within the middle of 2 consecutive VBIs. 
+ *
+ * there are no special hysteresis requirements:
+ * we even can choose another vbl_trim value each adjacent call
+ *
+ * theory of operation:
+ *
+ *   - a vbl_trim value of 0 yields in a slower graphics card frame rate 
+ *
+ *   - thus increasing the time between 2 VBIs
+ *
+ *   - as a result from xv_display_frame()-call point of view the
+ *     time distance to the last VBI decreases
+ *
+ *   - dependend on value of vsync.vbl_since.tv_usec (the elapsed
+ *     time since last VBI) we decide whether to increase
+ *     or to decrease graphics cards framerate.
+ *
+ *   - illustration of how VBI of graphics card wanders into
+ *     the center of xines xv_display_frame() calls, if graphics card framerate
+ *     is a little slower than xine's call to xv_display_frame()
+ *
+ *   xine reference clock (calls to xv_display_frame()):
+ *   ______          ________          ________          ________
+ *         |________|        |________|        |________|        |_______
+ *          ^                                       ^
+ *          |                                       |
+ *   graphics card clock (VBLANK edges):            |
+ *   _______|          _________           _________|          ________
+ *          |_________|         |_________|         |_________|        |__
+ *          |                                       |
+ *     out of center                             centered
+ *          |                                       |
+ *          | <--- edge drifts into the middle ---> |
+ *          |     of xine reference clock phase     |
+ *
+ * some annotations:
+ *
+ * RGB PAL runs at 50Hz resulting in cycle duration of 20000us
+ * so we ideally would use a SYNC_POINT value of 10000 here.
+ * but we also have to consider some latency until Xserver finally
+ * executes putimage() 
+ *
+ * FIX_ME!
+ * we currently abuse sync mechanism to avoid tearing when 
+ * textured XV adaptor is selected. so we at the moment place SYNC_POINT
+ * close to the end of active display phase. 
+ *
+ * VALUES HERE ARE QUICKLY HACKED AND ARE VALID FOR MY SYSTEM.
+ * YOU MUST ADAPT THEM TO YOUR NEEDS UNTIL AUTOMATIC
+ * TIMING TRIM ALGORITHM WILL BE IMPLEMENTED.
+ */
+
+// tinajas sync collection
+//#define SYNC_POINT       16500 
+//#define VBL_TRIM_UPPER   0x00c84618 /* -351 */
+//#define VBL_TRIM_LOWER   0          /* +341 */
+
+// senitas sync collection
+#define SYNC_POINT       15000 
+#define VBL_TRIM_UPPER   0x00c84605 /* -234 */  
+#define VBL_TRIM_LOWER   0x40c8460b /* +228 */  
+
+    if (vsync.vbl_since.tv_usec < SYNC_POINT) {
+        vsync.vbl_trim = VBL_TRIM_UPPER;
+    } else {
+        vsync.vbl_trim = VBL_TRIM_LOWER;
+    }
+#endif
+
     XvShmPutImage(this->display, this->xv_port,
                   this->drawable, this->gc, this->cur_frame->image,
                   this->sc.displayed_xoffset, this->sc.displayed_yoffset,