Several EmuGL files were missing copyright notices; added them. Others
had notices, but buried below other code, which makes checking for them
more difficult; I moved these to the top of the file.
These files were all created by GraphTech as a work for hire for AOSP.
Change-Id: Iee67bcf1c9b54ad5aa8a9288ded0382a974c32cb
On Macs running OS X 10.6 and 10.7 with Intel HD Graphics 3000, some
screens or parts of the screen are displayed upside down. The exact
conditions/sequence that trigger this aren't known yet; I haven't
been able to reproduce it in a standalone test. This also means I
don't know whether it is a driver bug, or a bug in the OpenglRender or
Translator code that just happens to work elsewhere.
Thanks to zhiyuan.li@intel.com for the patch this change is based on.
Change-Id: I04823773818d3b587a6951be48e70b03804b33d0
1. "emugen" generates four *dec.cpp files containing code like this
to decode offset to pointer in stream
tmp = *(T *)(ptr + 8 + 4 + 4 + 4 + *(size_t *)(ptr +8 + 4 + 4));
If *dec.cpp are compiled in 64-bit, size_t is 8-byte and dereferencing of
it is likley to get wild offset for dereferencing of *(T *) to crash the
code. Solution is to define tsize_t for "target size_t" instead
of using host size_t.
2. Cast pointers to "uintptr_t" instead of "unsigned int" for the 2nd
param of ShareGroup::getGlobalName(NamedObjectType,
ObjectLocalName/*64bit*/).
3. Instances of EGLSurface, EGLContext, and EGLImageKHR are used as
32-bit keys for std::map< unsigned int, * > SurfacesHndlMap,
ContextsHndlMap, and ImagesHndlMap, respectively. Cast the pointer to
uintptr_t and assert that the upper 32 bits are zero before passing it
to map::find().
4. Instances of GLeglImageOES are passed to eglAttachEGLImage(), which
expects "unsigned int". Cast them to uintptr_t and assert that the
upper 32 bits are zero.
5. The 5th param to GLEScontext::setPointer is GLvoid* but contains a
32-bit offset into the VBO when bufferName exists. Cast it to
uintptr_t and assert that the upper 32 bits are zero.
6. Use %zu instead of %d to print size_t values.
7. Cast pointers to uintptr_t in many other places; a sketch of the
patterns from items 1 and 3-5 follows this list.
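
A minimal sketch of the patterns above, assuming a 32-bit guest talking
to a 64-bit host; decodeArg, toKey, and the byte offsets are
illustrative, not the exact generated code:

    #include <assert.h>
    #include <stdint.h>

    typedef uint32_t tsize_t;  // "target size_t": the guest is 32-bit, so
                               // sizes/offsets in the stream are 4 bytes

    // Item 1: read the in-stream offset with the guest's width.
    template <typename T>
    T decodeArg(const unsigned char* ptr) {
        // Reading this as host size_t on a 64-bit build would consume
        // 8 bytes and produce a wild offset; tsize_t keeps it at 4.
        tsize_t offset = *(const tsize_t*)(ptr + 8 + 4 + 4);
        return *(const T*)(ptr + 8 + 4 + 4 + 4 + offset);
    }

    // Items 3-5: narrow a pointer-sized handle to a 32-bit key safely.
    static unsigned int toKey(const void* handle) {
        uintptr_t v = (uintptr_t)handle;
        assert(v <= 0xFFFFFFFFu);  // upper 32 bits must be zero
        return (unsigned int)v;
    }
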
Change-Id: Iba6e5bda08c43376db5b011e9d781481ee1f5a12
Whenever a surface was attached to a context, it was dequeuing a new
buffer, and enqueuing it when detached. This has the effect of doing a
SwapBuffers on every detach/attach cycle, which is just wrong and
occasionally caused visible glitches (e.g. animations going backwards
for one frame). It also broke some SurfaceTexture tests which
(validly) depend on specific buffer production/consumption counts.
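
A toy sketch of why the counts went wrong; BufferQueue and Surface
here are invented stand-ins, not the real classes:

    // Each enqueue makes a frame visible to the consumer, which is
    // exactly what SwapBuffers does; attach/detach must not do it.
    struct BufferQueue {
        int framesProduced = 0;
        void enqueue() { ++framesProduced; }
    };

    struct Surface {
        BufferQueue* q;
        void detachBuggy() { q->enqueue(); }  // old behavior: phantom frame
        void detach() {}                      // fixed: queue untouched
    };
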
Change-Id: Ibd4761e8842871b79fd9edf52272900193cb672d
Pass the swap interval from eglSwapInterval to the native window so it
can enable/disable SurfaceTexture's async mode. Fixes the deadlock in
SurfaceTextureGLToGLTest.EglDestroySurfaceUnrefsBuffers.
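
A minimal sketch of the pass-through, assuming the EGL surface wraps an
ANativeWindow; passSwapInterval is an illustrative name, not the real
entry point:

    #include <EGL/egl.h>
    #include <system/window.h>

    // SurfaceTexture treats interval 0 as async mode (frames may be
    // dropped) and nonzero as synchronous, so forwarding the value is
    // enough to toggle the mode.
    static EGLBoolean passSwapInterval(ANativeWindow* win, EGLint interval) {
        native_window_set_swap_interval(win, interval);
        return EGL_TRUE;
    }
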
Change-Id: I19bf69247341f5617223722df63d6c7f8cf389c6
* EGLImageTargetRenderbufferStorageOES was incorrectly accepting
TEXTURE_EXTERNAL_OES as a target. Revert that; the host GL will
correctly reject it with INVALID_ENUM.
* Handle the REQUIRED_TEXTURE_IMAGE_UNITS_OES texture parameter query.
* Validate texture parameters set on TEXTURE_EXTERNAL textures (see
the sketch below); otherwise invalid parameters would work on the
emulator but not on a real device.
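
A sketch of the validation in the last item; per the
OES_EGL_image_external spec, external textures accept only these
filter and wrap values. The function name is illustrative:

    #include <GLES2/gl2.h>

    static bool isValidExternalTexParam(GLenum pname, GLint param) {
        switch (pname) {
        case GL_TEXTURE_MIN_FILTER:
            // external textures allow only non-mipmapped filters
            return param == GL_NEAREST || param == GL_LINEAR;
        case GL_TEXTURE_WRAP_S:
        case GL_TEXTURE_WRAP_T:
            return param == GL_CLAMP_TO_EDGE;
        default:
            return true;  // other pnames are validated elsewhere
        }
    }
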
Change-Id: I49a088608d58a9822f33e5916bd354eee3709127
The gralloc API assumes system-wide reference counting of gralloc
buffers. The host-GL accelerated gralloc maps buffers to host-side
ColorBuffer objects, but was destroying them unconditionally in
gralloc_free(), ignoring any additional references from
gralloc_register_buffer().
This affected the SurfaceTexture gralloc buffers used by the
Browser/WebView. For some reason these buffers are actually allocated
by SurfaceFlinger and passed back to the WebView through Binder. But
since SurfaceFlinger doesn't actually need the buffer for anything,
sometime after the WebView has called gralloc_register_buffer(),
SurfaceFlinger calls gralloc_free() on it. This caused the host
ColorBuffer to be destroyed long before the WebView was done using it.
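
A minimal sketch of the fix's shape, with hypothetical names; since
the references span processes, the count has to live host-side, which
is modeled here by the rc* stubs:

    #include <stdint.h>

    // Stubs standing in for host render-control calls; the host keeps
    // the reference count on the ColorBuffer.
    static void rcOpenColorBuffer(uint32_t)  { /* host: refcount++ */ }
    static void rcCloseColorBuffer(uint32_t) { /* host: refcount--, destroy at 0 */ }

    // Every local reference (the allocation, each register_buffer call)
    // opens the ColorBuffer; every release closes it. gralloc_free() no
    // longer destroys the host object unconditionally, so SurfaceFlinger
    // freeing its handle cannot kill a buffer the WebView still uses.
    static void onRegisterBuffer(uint32_t hostColorBuffer) {
        rcOpenColorBuffer(hostColorBuffer);
    }
    static void onFreeOrUnregister(uint32_t hostColorBuffer) {
        rcCloseColorBuffer(hostColorBuffer);
    }
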
Change-Id: I33dbee887a48a6907041cf19e9f38a1f6c983eff
Copy changes faaf1553cf and
f37a7ed6c5 from the GLESv1 translator to
the GLESv2 translator. After this, both translators use the same logic
for glEGLImageTargetTexture2DOES().
Change-Id: I0a95bf2301df7b7428abc593f38170edf4cbda30
An off-by-two bug when removing textures from the tracking array could
overwrite malloc's memory-chunk data structure, usually resulting in a
heap-corruption abort on a later malloc/realloc/free.
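
A sketch of the kind of removal involved; the array and names are
illustrative, not the actual tracking code:

    #include <string.h>

    // Remove entry i by shifting the tail down one slot. Getting the
    // element count wrong by two here writes past the end of the
    // allocation and smashes malloc's chunk header.
    static void removeTexture(unsigned* textures, size_t* count, size_t i) {
        memmove(&textures[i], &textures[i + 1],
                (*count - i - 1) * sizeof(textures[0]));
        --*count;
    }
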
Bug: 5951738
Change-Id: I11056bb62883373c2a3403f53899347ff8cdabf2
The data pointer argument to glBufferData can be NULL; this
[re]allocates the buffer while leaving the contents undefined.
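
A sketch of the NULL case in a client-side cache like the encoder
keeps; BufferData and cacheBufferData are illustrative names:

    #include <stdlib.h>
    #include <string.h>

    struct BufferData {
        size_t size;
        void*  ptr;
    };

    // (Re)allocate the cached copy; only copy when data is non-NULL,
    // since glBufferData(target, size, NULL, usage) legitimately
    // leaves the contents undefined.
    static void cacheBufferData(BufferData* b, size_t size, const void* data) {
        b->ptr  = realloc(b->ptr, size);
        b->size = size;
        if (data) {
            memcpy(b->ptr, data, size);
        }
    }
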
Bug: 5833436
Change-Id: Ia1ddf62e2cd2c59d3d631e01d23d7c557ca5a52e
* Disable verbose debug spam.
* Add a missing GL enum to a utility function. The default case was
returning the correct size, so this doesn't fix any bugs; it just
removes some logcat spam (see the sketch below).
* Comment and whitespace corrections.
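
A sketch of the utility-function pattern from the second item;
glSizeofTypeSketch and the particular enums are illustrative:

    #include <GLES2/gl2.h>
    #include <cstdio>

    // Per-component size lookup. A type missing from the switch still
    // got the right answer from the default case, but logged a warning
    // on every call; adding the case silences the spam.
    static int glSizeofTypeSketch(GLenum type) {
        switch (type) {
        case GL_BYTE:
        case GL_UNSIGNED_BYTE:  return 1;
        case GL_SHORT:
        case GL_UNSIGNED_SHORT: return 2;
        case GL_FIXED:          // previously missing; hit the default
        case GL_FLOAT:          return 4;
        default:
            fprintf(stderr, "unknown GL type 0x%x\n", type);
            return 4;
        }
    }
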
Change-Id: I83fb8644331ae1072d6a8dae9c041da92073089f
The code that creates the GL-accelerated screen view wasn't converting
the upper-left-relative coordinates used within the emulator to the
lower-left-relative coordinates used by the Cocoa APIs on OS X. Since
most skins have the screen view centered vertically, this often just
happened to work.
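
The flip itself is one line; a sketch, assuming the view's frame is
expressed relative to its parent (names are illustrative):

    // Convert a top-left-origin y (emulator skin coords) to Cocoa's
    // bottom-left origin within a parent view of height parentH.
    static int topLeftToCocoaY(int y, int height, int parentH) {
        return parentH - (y + height);
    }

For a vertically centered view, y == parentH - (y + height), so the
missing conversion was invisible; any other placement put the view at
the wrong height.
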
Bug: 5782118
Change-Id: I2f96ee181e850df5676d10a82d86c94421149b40
The emulator EGL implementation tried to hold its own reference to
buffers acquired/released with dequeueBuffer/queueBuffer, but was
missing an incRef after dequeueBuffer during swapBuffers.
Since the native window holds a reference to the buffer between
dequeueBuffer and queueBuffer, the EGL reference isn't needed anyway.
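
A sketch of the simplified swap path; Window and Buffer are invented
stand-ins for the ANativeWindow types:

    struct Buffer;
    struct Window {
        int (*dequeueBuffer)(Window*, Buffer**);
        int (*queueBuffer)(Window*, Buffer*);
    };

    struct WindowSurfaceSketch {
        Window* mWindow;
        Buffer* mBuffer;

        void swapBuffers() {
            // Publish the current frame, then acquire the next one. The
            // window holds its own reference to the dequeued buffer until
            // it is queued back, so the EGL layer needs no incRef of its
            // own (the missing incRef was the bug).
            mWindow->queueBuffer(mWindow, mBuffer);
            mWindow->dequeueBuffer(mWindow, &mBuffer);
        }
    };
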
Change-Id: I95e4f9f4faf59198f99939cdca6603fe176c56bc
The glBufferData, glBufferSubData, and glDeleteBuffers entry points
had interception routines in GL2Encoder that cache the data, but they
weren't hooked up. So when glDrawElements tried to retrieve the cached
data, it wasn't there.
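
A sketch of the missing hookup, with invented names; the real
encoder's dispatch table and setters differ:

    // The encoder intercepts GL entry points through a table of function
    // pointers. If the caching interceptor is never installed, the raw
    // encoder runs instead, and glDrawElements later finds no cached copy.
    struct EncoderSketch {
        void (*glBufferData)(int target, long size, const void* data, int usage);

        static void s_glBufferData(int target, long size,
                                   const void* data, int usage) {
            // cache <data, size> for glDrawElements, then encode as before
            (void)target; (void)size; (void)data; (void)usage;
        }

        EncoderSketch() {
            glBufferData = s_glBufferData;  // the hookup that was missing
        }
    };
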
Change-Id: Iaed11fccaefab3186485be53a0f15c8ca0a255f9