Right now we default to "all", which gives us much the same situation as
before, but when the "typical" option is implemented we can change the
default and reduce the number of visuals the GLX module bloats the X server
with.
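As a sketch of how this would be exposed in xorg.conf (the option name and
values here match what xorg.conf(5) later documents; treat the exact
spelling as an assumption):

    Section "Screen"
        Identifier "Screen0"
        Option     "GlxVisuals" "all"   # "typical" may become the default later
    EndSection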
Instead of the fragile setup where we filtered for the modes common to the
DDX-generated GLX visuals and the DRI-driver-generated fbconfigs, we now
simply take the fbconfigs returned by the DRI driver as our supported set.
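Schematically, the change amounts to something like this (all names below
are hypothetical, not the real GLX/DRI entry points):

    /* Old: intersect DDX-generated visuals with driver fbconfigs.
     * New: adopt the driver's list wholesale. */
    __GLXconfig *configs = driScreenGetConfigs(screen);  /* hypothetical */
    screen->fbconfigs    = configs;
    screen->numFBConfigs = countConfigs(configs);        /* hypothetical */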
A lot of EDID writers apparently end up stuffing centimeters (as in the
maximum image size field) into the detailed timings instead of millimeters.
Some of them only get it wrong in one axis. Also, add a quirk to let us
mark the largest 75Hz mode as preferred, which will often be needed for
EDID 1.0 CRTs.
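A sketch of the cm fixup (the quirk flag and field names are assumptions
modeled on the EDID quirk table, not copied from edid.h):

    /* Detailed timings store the image size in mm; these EDIDs wrote
     * cm, as the maximum-image-size field uses, so scale back up. */
    if (quirks & DDC_QUIRK_DETAILED_IN_CM) {
        timing->h_size *= 10;   /* assumed field names */
        timing->v_size *= 10;
    }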
If no screen section is present, a default one will be created. It will be
attached either to the first device section in the xorg.conf (allowing you
to specify something like using EXA without having a screen section) or to
a default device section if none is present in the file.
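For instance, an illustrative config like this now stands on its own, with
the screen section synthesized around it:

    Section "Device"
        Identifier "Card0"
        Option     "AccelMethod" "EXA"
    EndSection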
This allows the screen section to omit an explicit device section. If it
does, and there is a device section in the xorg.conf, the first one will be
used. If there is no device section at all, a default one will be created
that loads the automatically determined module.
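So a screen section as bare as this (illustrative) is now acceptable; the
device it drives is found or synthesized as described above:

    Section "Screen"
        Identifier   "Screen0"
        DefaultDepth 24
    EndSection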
This is what we're currently shipping in Debian. It lets drivers ship a
text file listing the PCI IDs they support, and has the server read those
files on startup when no driver is specified. This works, but isn't the
final solution.
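The idea, with a made-up file as illustration (the real format, file name,
and install path are not shown here):

    # nv.ids -- PCI IDs the nv driver claims (hypothetical format)
    10DE0112
    10DE0421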
Modified DGAStealXXXEvent to take a device argument.
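Sketch of the updated entry points (prototypes assumed, not copied from
dgaproc.h):

    Bool DGAStealKeyEvent(DeviceIntPtr dev, int index, int key_code, int is_down);
    Bool DGAStealMotionEvent(DeviceIntPtr dev, int index, int dx, int dy);
    Bool DGAStealButtonEvent(DeviceIntPtr dev, int index, int button, int is_down);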
The evdev driver only sends one valuator when only one axis changed. We
need to check for DGA either way (in xf86PostMotionEventP); otherwise we
lose purely horizontal or vertical movements.
Note that DGA does not do XI events.
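A sketch of the check in xf86PostMotionEventP (simplified; variable names
are illustrative and the DGA call is the assumed prototype above):

    /* Hand motion to DGA before posting it as an XI event, even when
     * only one axis is reported, so axis-aligned motion isn't lost. */
    int dx = 0, dy = 0;
    if (first_valuator == 0 && num_valuators >= 1)
        dx = valuators[0];
    if (first_valuator <= 1 && first_valuator + num_valuators >= 2)
        dy = valuators[1 - first_valuator];
    if (DGAStealMotionEvent(device, screen_index, dx, dy))
        return;   /* DGA consumed it */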
Center the frame around the first pointer found, then update all pointers
on the same screen to move to the edges (if necessary).
Note: xf86WarpCursor needs to be modified; it still uses the deprecated
miPointerWarpCursor and will kill the server when called with
inputInfo.pointer.
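Roughly, as a sketch (the position accessor and frame bounds below are
assumed helpers, not the actual code):

    /* Clamp every other pointer on this screen into the new frame. */
    DeviceIntPtr dev;
    for (dev = inputInfo.devices; dev; dev = dev->next) {
        int x, y;
        if (!IsPointerDevice(dev) || dev == firstPointer)
            continue;
        miPointerGetPosition(dev, &x, &y);        /* assumed accessor */
        if (x < frame_x0) x = frame_x0;
        if (x > frame_x1) x = frame_x1;
        if (y < frame_y0) y = frame_y0;
        if (y > frame_y1) y = frame_y1;
        /* warp dev to (x, y) -- this is where xf86WarpCursor falls over */
    }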
Removes "LookupKeyboardDevice" and "LookupPointerDevice" in favor of
inputInfo.keyboard and inputInfo.pointer, respectively; all use cases
are non-XI compliant anyway.
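Call sites change mechanically, e.g.:

    -    keybd = LookupKeyboardDevice();
    +    keybd = inputInfo.keyboard;
    -    ptr   = LookupPointerDevice();
    +    ptr   = inputInfo.pointer;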
Matches linuxPci.c changes made in 8279444a54
Fixes compiler errors:
"ix86Pci.c", line 194: too many struct/union initializers
"ix86Pci.c", line 204: too many struct/union initializers
"ix86Pci.c", line 214: too many struct/union initializers
When processing events from the EQ, _always_ call the processInputProc of the
matching device. For XI devices, this proc is wrapped in three layers.
Core event handling is wrapped by XI event handling, which is wrapped by XKB.
A core event now passes through XKB -> XI -> DIX.
This gets rid of a sync'd grab problem: with the previous code, core events
disappeared during a sync'd device grab because mieqProcessInputEvents
called the processInputProc of the VCP/VCK instead of the actual device's.
This led to the event being processed as normal instead of being enqueued
for later replay.
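The gist of the dispatch, sketched (simplified from what
mieqProcessInputEvents would do; the event and device bookkeeping is
elided):

    /* Always dispatch through the device that generated the event, not
     * the virtual core pointer/keyboard, so the XKB -> XI -> core
     * wrapping runs and a sync'd grab can enqueue the event for replay. */
    dev = e->pDev;
    (*dev->public.processInputProc)(event, dev, nevents);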
Even though they're defined to be zero by the spec, we've seen an EDID
block where the (empty) ASCII strings were stuffed in one byte too early,
leading to the descriptor being considered a detailed timing instead.
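The rule being tripped up, sketched in C (helper names hypothetical; the
zero test itself is per the EDID spec):

    /* In an 18-byte descriptor block, bytes 0-1 hold the pixel clock.
     * Zero means "monitor descriptor" (name, serial, range limits...);
     * anything else is parsed as a detailed timing.  A string written
     * one byte early makes byte 1 nonzero, so the block is misread as
     * a detailed timing. */
    if (block[0] == 0 && block[1] == 0)
        parse_monitor_descriptor(block);   /* hypothetical */
    else
        parse_detailed_timing(block);      /* hypothetical */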