Saturday, December 20, 2014

Workgroup reduction functions: how well do they perform?

The initial AMD driver for OpenCL 2.0 has already been released. The latest version of the OpenCL parallel programming API is quite interesting, as it supports shared virtual memory, dynamic parallelism, pipes and other features. Among these are the workgroup and sub-group functions: abstractions that, on one hand, simplify parallel primitive operations such as broadcast, scan and reduction and, on the other, give the compiler the opportunity for further optimizations.

In order to evaluate workgroup function performance I developed a test case for the reduction of the sum 1+2+3+...+N. The reduction is implemented in 3 different ways with 3 kernels. The first kernel performs the reduction in the classical manner, using shared (local) memory. The last performs the reduction with the workgroup reduction function. The intermediate kernel uses shared memory for the inter-wavefront stages and the sub-group reduction function for the intra-wavefront stage.
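For reference, here is a minimal sketch of the two extreme variants (these are not the exact kernels of the repository linked below): a classical tree reduction in local memory and one relying entirely on the built-in work_group_reduce_add function of OpenCL 2.0. Each workgroup writes its partial sum to a separate slot of a hypothetical partial-sums buffer:

/* Kernel 1 style: classical tree reduction in local memory (sketch).
   Assumes a power-of-two workgroup size and N a multiple of it. */
__kernel void reduce_local(__global const uint *in, __global uint *partial,
                           __local uint *scratch){
 uint lid = get_local_id(0);
 scratch[lid] = in[get_global_id(0)];
 barrier(CLK_LOCAL_MEM_FENCE);
 for(uint s = get_local_size(0)/2; s > 0; s >>= 1){
  if( lid < s )
   scratch[lid] += scratch[lid+s];
  barrier(CLK_LOCAL_MEM_FENCE);
 }
 if( lid == 0 )
  partial[get_group_id(0)] = scratch[0];
}

/* Kernel 3 style: the whole reduction done by the OpenCL 2.0 workgroup function */
__kernel void reduce_wg(__global const uint *in, __global uint *partial){
 uint sum = work_group_reduce_add(in[get_global_id(0)]);
 if( get_local_id(0) == 0 )
  partial[get_group_id(0)] = sum;
}

The per-workgroup partial sums still need a final pass (or a second kernel launch) to be combined into the total.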

The results seem somewhat disappointing. The execution configuration is a 64bit Linux system with an R7-260X GPU. The results are as follows:

Workgroup and sub-workgroup OpenCL 2.0 function evaluation test case
Platform/Device selection
Total platforms: 1
AMD Accelerated Parallel Processing
 1. Bonaire/Advanced Micro Devices, Inc.
 2. Intel(R) Pentium(R) 4 CPU 3.06GHz/GenuineIntel
Select device index: 
Device info
Platform:       AMD Accelerated Parallel Processing
Device:         Bonaire
Driver version: 1642.5 (VM)
OpenCL version: OpenCL 2.0 AMD-APP (1642.5)
Great! OpenCL 2.0 is supported :)
Building kernel with options "-cl-std=CL2.0 -cl-uniform-work-group-size -DK3 -DK2 -DWAVEFRONT_SIZE=64"

1. Shared memory only kernel
Executing...Done!
Output: 2147450880 / Time: 0.089481 msecs (0.732401 billion elements/second)
PASSED!

2. Hybrid kernel via subgroup functions
Executing...Done!
Output: 2147450880 / Time: 0.215851 msecs (0.303617 billion elements/second)
Relative speed-up to kernel 1: 0.41455
PASSED!

3. Workgroup function kernel
Executing...Done!
Output: 2147450880 / Time: 0.475408 msecs (0.137852 billion elements/second)
Relative speed-up to kernel 1: 0.188219
PASSED!

The kernel with the workgroup function seems to perform more than 5 times slower than the one using just shared memory. This should definitely not be the case in a performance oriented environment like OpenCL. The performance of workgroup functions should be at least on par with a hand-written shared memory reduction; otherwise the workgroup functions are not really useful.

Unfortunately, CodeXL version 1.6 does not support static analysis of OpenCL 2.0 kernels and therefore I cannot inspect the assembly code produced for the workgroup functions. In principle, cross-lane (swizzle) operations should be leveraged in order to optimize such operations.

Test case download link on github:
https://github.com/ekondis/cl2-reduce-bench

In case you notice any different results please let me know.

Tuesday, December 9, 2014

AMD OpenCL 2.0 SDK is available (BETA)

Eventually, the AMD SDK for OpenCL 2.0 has been released in beta form. There are many examples exhibiting the new features. There are new accompanying documentation files, though they are not written from scratch. For instance, table 2.5 in the optimization guide refers only to HD 7xxx devices. It wouldn't be hard for AMD to add the respective tables for Rx 2xx devices. Overall, this is a significant step forward for OpenCL 2.0 adoption.

The device driver supporting OpenCL 2.0 was also released today.

For more information and download:

Sunday, October 5, 2014

Least required GPU parallelism for kernel executions

GPUs require a vast number of threads per kernel invocation in order to utilize all execution units. As a first thought, one should spawn at least as many threads as there are shader units (or CUDA cores, or processing elements). However, this is not enough; the type of scheduling must also be taken into account. Scheduling within a Compute Unit is done by multiple schedulers, which in effect restricts the group of shader units on which a thread can execute. For instance, a Fermi SM consists of 32 shader units but requires at least 64 threads, because it has 2 schedulers and the first can schedule threads only on the first group of 16 shader units while the other serves the remaining group. Thus a greater number of threads is required. What about the other GPUs? What is the minimum threading required in order to keep all shader units busy? The answer lies in the schedulers of the compute units of each GPU architecture.

NVidia Fermi GPUs


Each SM (Compute Unit) consists of 2 schedulers. Each scheduler handles 32 threads (WARP size), thus 2x32=64 threads are the minimum required per SM. For instance a GTX480 with 15 CUs requires at least 960 active threads.

NVidia Kepler GPUs

Each SM (Compute Unit) consists of 4 schedulers. Each scheduler handles 32 threads (WARP size), thus 4x32=128 threads are the minimum requirement per SM. A GTX660 with 8 CUs requires at least 1024 active threads.

In addition, more independent instructions are required in the instruction stream (instruction level parallelism) in order to utilize the extra 64 shaders of each CU (192 in total).
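As a rough illustration (a generic sketch, not tied to any particular benchmark), splitting a loop into independent accumulation chains is the typical way to expose such instruction level parallelism:

/* Sketch: two independent accumulation chains per workitem expose
   instruction level parallelism (assumes n is even). The two additions
   in the loop body have no dependency on each other. */
__kernel void sum_ilp(__global const float *a, __global float *out, int n){
 float acc0 = 0.0f, acc1 = 0.0f;
 for(int i = 0; i < n; i += 2){
  acc0 += a[i];     /* independent of the next statement */
  acc1 += a[i+1];   /* can be issued alongside the previous one */
 }
 out[get_global_id(0)] = acc0 + acc1;
}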


NVidia Maxwell GPUs

Same as Kepler: each SM consists of 4 schedulers handling 32 threads each, so at least 128 threads are required per SM. A GTX980 with 16 CUs requires at least 2048 active threads.

The Kepler requirement for extra instruction independence does not apply here, since each Maxwell CU contains only 128 shader units (exactly matched by the 128 threads).

AMD GCN GPUs

Regarding the AMD GCN units, the requirement is even greater. Each CU scheduler handles threads in four groups, one for each SIMD unit, which is like having 4 schedulers per CU. Furthermore, the unit of thread execution (the wavefront) consists of 64 threads instead of 32. Therefore each CU requires at least 4x64=256 threads. For instance, an R9-280X with 32 CUs requires a vast amount of 8192 threads! This fact justifies why, in many research papers, AMD GPUs fail to stand against NVidia GPUs for small problem sizes where the amount of active threads is not sufficient.
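Putting the above rules together, the minimum thread count can be estimated as CUs x schedulers per CU x threads per scheduling unit. A small sketch (my own summary, using the example cards mentioned above):

#include <stdio.h>

/* Minimum threads = CUs x schedulers per CU x threads per scheduling unit */
static unsigned min_threads(unsigned cus, unsigned schedulers, unsigned unit){
 return cus * schedulers * unit;
}

int main(void){
 printf("GTX480  (Fermi,   15 CUs): %u\n", min_threads(15, 2, 32)); /*  960 */
 printf("GTX660  (Kepler,   8 CUs): %u\n", min_threads( 8, 4, 32)); /* 1024 */
 printf("GTX980  (Maxwell, 16 CUs): %u\n", min_threads(16, 4, 32)); /* 2048 */
 printf("R9-280X (GCN,     32 CUs): %u\n", min_threads(32, 4, 64)); /* 8192 */
 return 0;
}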

Friday, May 23, 2014

IWOCL 2014 (International Workshop on OpenCL) presentations available online

Presentation files of the IWOCL (International Workshop on OpenCL) 2014 are available for download.

URL:
http://iwocl.org/iwocl-2014/agenda-and-slides/


Note: The International Workshop on OpenCL (IWOCL) is an annual meeting of OpenCL users, researchers, developers and suppliers to share OpenCL best practice, and to promote the evolution and advancement of the OpenCL standard. The meeting is open to anyone who is interested in contributing to, and participating in, the OpenCL community.

Wednesday, April 16, 2014

Loop execution performance comparison in various programming languages

The main focus of a GPU programmer is performance. Therefore, the execution time of various time-consuming loops is a significant consideration. In this regard I performed some experiments with a small nested loop in various programming languages. The problem investigated is a trivial one, though it requires a significant number of operations performed in a nested loop.

Problem definition


Search for a pair of integers in the [1..15000] range whose product is equal to 87654321.

Loop implementations


A trivial solution of this problem is provided in the following python code:
for i in range(1, 15001):
 for j in range(i+1, 15001):
  if i*j==87654321:
   print "Found! ",str(i)," ",str(j)
   break

Converting the code above to C is straightforward. The code can be easily parallelized using OpenMP constructs by adding a single line:

#pragma omp parallel for private(j) schedule(dynamic,500)
        for(i=1; i<=15000; i++)
                for(j=i+1; j<=15000; j++)
                        if( i*j==87654321 )
                                printf("Found! %d %d\n", i, j);

The schedule clause directs OpenMP to apply dynamic scheduling in order to address the unbalanced nature of the iterations (the first outer-loop iteration performs 14999 inner iterations while the last one performs none).

A naive implementation in OpenCL is also provided. A workitem is assigned to each iteration of the outer loop:

__kernel void factor8to1(unsigned int limit, global int *results){
 int i = get_global_id(0);

 if( i<=limit )
  for(int j=i+1; j<=limit; j++)
   if( i*j==87654321 ){
    results[0] = i;
    results[1] = j;
   }
}

The OpenCL kernel needs to be launched with an NDRange of 15000 workitems. This amount of parallelism is not adequate, especially for large GPUs, but it should be enough for a demo.
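For completeness, a minimal host-side launch could look roughly like the following sketch; it assumes that a program, a command queue and a results buffer (named program, queue and results_buffer here, for illustration only) have already been created, and it is not the host code of the repository:

#include <stdio.h>
#include <CL/cl.h>

/* ... 'program', 'queue' and 'results_buffer' are assumed to exist already ... */
cl_int err;
cl_uint limit = 15000;
size_t global_size = 15000;            /* one workitem per outer-loop iteration */
cl_int results[2] = {0, 0};

cl_kernel kernel = clCreateKernel(program, "factor8to1", &err);
clSetKernelArg(kernel, 0, sizeof(cl_uint), &limit);
clSetKernelArg(kernel, 1, sizeof(cl_mem), &results_buffer);

/* NULL local size lets the runtime choose the workgroup size */
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size, NULL, 0, NULL, NULL);
clEnqueueReadBuffer(queue, results_buffer, CL_TRUE, 0, sizeof(results), results, 0, NULL, NULL);
printf("Found! %d %d\n", results[0], results[1]);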

Of course this kernel is neither well balanced nor optimized, in order to keep it clear to read and understand. Note that the goal of this project is not to provide an optimized factorization algorithm, but to demonstrate loop code efficiency in various scripting and compiled languages, as well as to provide a glimpse of the gains of parallel processing.

Code is written in the following languages:
  1.  Python
  2.  JavaScript
  3.  Free Pascal
  4.  C
  5.  OpenMP/C
  6.  OpenCL

All sources are provided on github: https://github.com/ekondis/factor87654321

Execution results on A6-1450 APU


Here are the execution results on an AMD A6-1450 APU, a low-power processing unit which combines a CPU and a GPU on the same die. It features a quad core CPU (Jaguar cores) running at 1GHz and a GCN GPU with 2 compute units (128 processing elements in total).


The benefits of parallel processing are apparent. The advancements of JavaScript JIT engines are also evident.

Tuesday, February 18, 2014

Maxwell further lowers double precision performance for GeForce GPUs

Now this double precision mockery seems to have no end. For top-end Fermi based GPUs the ratio was 1/8, which was just acceptable. For the rest of the Fermi GPUs the ratio became 1/12. Thereafter, Kepler further reduced it to 1/24. And today we learn that the first Maxwell GPUs cut it further, to 1/32!

As long as NVidia wants to sell as many Teslas as it can, we will never be able to achieve acceptable double precision performance from consumer cards. Actually, using a consumer GPU (excluding the GTX Titan) for a compute intensive problem is hardly worth it, considering the CPU improvements with 256bit AVX2 plus the addition of FMA instructions. And certainly not everyone has $1000 to spend on a GTX Titan. I would expect decent double precision performance from a mid-range card of, let's say, $300, but unfortunately that's not the case.

I hope the next architecture, dubbed Volta, will not employ a 1/128 ratio, though it doesn't actually make much difference whether it is 1/32, 1/64 or 1/128. These ratios render double precision compute on consumer cards meaningless.

Source: http://www.tomshardware.com/reviews/geforce-gtx-750-ti-review,3750.html#xtor=RSS-182

Saturday, February 15, 2014

AMD Catalyst 14.1 and OpenCL SPIR

I recently noticed that the AMD Catalyst 14.1 BETA seems to enable a very interesting extension. Look at the extract of the clinfo output below, executed on an HD-7750:


Number of platforms:     1
  Platform Profile:     FULL_PROFILE
  Platform Version:     OpenCL 1.2 AMD-APP (1411.4)
  Platform Name:     AMD Accelerated Parallel Processing
  Platform Vendor:     Advanced Micro Devices, Inc.
  Platform Extensions:     cl_khr_icd cl_amd_event_callback cl_amd_offline_devices cl_amd_hsa 


  Platform Name:     AMD Accelerated Parallel Processing
Number of devices:     2
  Device Type:      CL_DEVICE_TYPE_GPU
  Device ID:      4098
  Board name:      AMD Radeon HD 7700 Series   
  Device Topology:     PCI[ B#5, D#0, F#0 ]
  Max compute units:     8
  Max work items dimensions:    3
    Max work items[0]:     256
    Max work items[1]:     256
    Max work items[2]:     256
  Max work group size:     256
  Preferred vector width char:    4
  Preferred vector width short:    2
  Preferred vector width int:    1
  Preferred vector width long:    1
  Preferred vector width float:    1
  Preferred vector width double:   1
  Native vector width char:    4
  Native vector width short:    2
  Native vector width int:    1
  Native vector width long:    1
  Native vector width float:    1
  Native vector width double:    1
  Max clock frequency:     820Mhz
  Address bits:      32
  Max memory allocation:    685349273
  Image support:     Yes
  Max number of images read arguments:   128
  Max number of images write arguments:   8
  Max image 2D width:     16384
  Max image 2D height:     16384
  Max image 3D width:     2048
  Max image 3D height:     2048
  Max image 3D depth:     2048
  Max samplers within kernel:    16
  Max size of kernel argument:    1024
  Alignment (bits) of base address:   2048
  Minimum alignment (bytes) for any datatype:  128
  Single precision floating point capability
    Denorms:      No
    Quiet NaNs:      Yes
    Round to nearest even:    Yes
    Round to zero:     Yes
    Round to +ve and infinity:    Yes
    IEEE754-2008 fused multiply-add:   Yes
  Cache type:      Read/Write
  Cache line size:     64
  Cache size:      16384
  Global memory size:     802160640
  Constant buffer size:     65536
  Max number of constant args:    8
  Local memory type:     Scratchpad
  Local memory size:     32768
  Kernel Preferred work group size multiple:  64
  Error correction support:    0
  Unified memory for Host and Device:   0
  Profiling timer resolution:    1
  Device endianess:     Little
  Available:      Yes
  Compiler available:     Yes
  Execution capabilities:     
    Execute OpenCL kernels:    Yes
    Execute native function:    No
  Queue properties:     
    Out-of-Order:     No
    Profiling :      Yes
  Platform ID:      0xb7446660
  Name:       Capeverde
  Vendor:      Advanced Micro Devices, Inc.
  Device OpenCL C version:    OpenCL C 1.2 
  Driver version:     1411.4 (VM)
  Profile:      FULL_PROFILE
  Version:      OpenCL 1.2 AMD-APP (1411.4)
  Extensions:      cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_ext_atomic_counters_32 cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_khr_image2d_from_buffer cl_khr_spir 

Just look at the last line of the supported extensions of the device. There is a magic word called cl_khr_spir! Does this mean that SPIR is already supported by the driver? I don't know and I haven't performed any tests yet. Unfortunately I don't have much time to do it now, but if anyone does please let me know.
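If anyone wants to check for the extension programmatically, a quick way (just a sketch, assuming a valid device ID is already at hand; the helper name is mine) is to query CL_DEVICE_EXTENSIONS and search for the cl_khr_spir token:

#include <string.h>
#include <CL/cl.h>

/* Returns 1 if the device reports cl_khr_spir among its extensions */
int supports_spir(cl_device_id device){
 char extensions[4096];
 clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, sizeof(extensions), extensions, NULL);
 return strstr(extensions, "cl_khr_spir") != NULL;
}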


Tuesday, January 28, 2014

Benchmarking the capabilities of your OpenCL device with clpeak, etc.

In case you're interested in benchmarking the performance of your GPU/CPU with OpenCL you could try a simple program named clpeak. It's hosted on github: https://github.com/krrishnarraj/clpeak

For instance, here is the output on the A6-1450 APU.
Platform: AMD Accelerated Parallel Processing
  Device: Kalindi
    Driver version : 1214.3 (VM) (Linux x64)
    Compute units  : 2

    Global memory bandwidth (GBPS)
      float   : 6.60
      float2  : 6.71
      float4  : 6.45
      float8  : 3.51
      float16 : 1.83

    Single-precision compute (GFLOPS)
      float   : 100.63
      float2  : 101.26
      float4  : 100.94
      float8  : 100.32
      float16 : 99.08

    Double-precision compute (GFLOPS)
      double   : 6.35
      double2  : 6.37
      double4  : 6.36
      double8  : 6.34
      double16 : 6.32

    Integer compute (GIOPS)
      int   : 20.33
      int2  : 20.39
      int4  : 20.36
      int8  : 20.33
      int16 : 20.32

    Transfer bandwidth (GBPS)
      enqueueWriteBuffer         : 1.80
      enqueueReadBuffer          : 1.98
      enqueueMapBuffer(for read) : 84.42
        memcpy from mapped ptr   : 1.81
      enqueueUnmap(after write)  : 54.32
        memcpy to mapped ptr     : 1.87

    Kernel launch latency : 138.08 us

  Device: AMD A6-1450 APU with Radeon(TM) HD Graphics
    Driver version : 1214.3 (sse2,avx) (Linux x64)
    Compute units  : 4

    Global memory bandwidth (GBPS)
      float   : 1.97
      float2  : 2.51
      float4  : 1.95
      float8  : 2.79
      float16 : 3.54

    Single-precision compute (GFLOPS)
      float   : 1.30
      float2  : 2.50
      float4  : 5.01
      float8  : 9.21
      float16 : 1.07

    Double-precision compute (GFLOPS)
      double   : 0.62
      double2  : 1.35
      double4  : 2.56
      double8  : 6.27
      double16 : 2.44

    Integer compute (GIOPS)
      int   : 1.60
      int2  : 1.22
      int4  : 4.70
      int8  : 8.08
      int16 : 7.91

    Transfer bandwidth (GBPS)
      enqueueWriteBuffer         : 2.67
      enqueueReadBuffer          : 2.03
      enqueueMapBuffer(for read) : 13489.22
        memcpy from mapped ptr   : 2.02
      enqueueUnmap(after write)  : 26446.84
        memcpy to mapped ptr     : 2.03

    Kernel launch latency : 32.74 us


P.S.
1) Some performance measures of the recently released Kaveri APU are provided on Anandtech:
http://www.anandtech.com/show/7711/floating-point-peak-performance-of-kaveri-and-other-recent-amd-and-intel-chips
2) If you are interested you can find the presentation of the Kaveri on Tech-Day in PDF format here:
http://www.pcmhz.com/media/2014/01-ianuarie/14/amd/AMD-Tech-Day-Kaveri.pdf
3) The Alpha 2 of Ubuntu 14.04 seems to resolve the shutdown problem of the Temash laptop (Acer Aspire v5 122p). It must be due to the 3.13 kernel update. So, I'm looking forward to the final Ubuntu 14.04 release.

Thursday, January 2, 2014

Compute performance with OpenCL on AMD A6-1450 (Temash APU)

Being interested in the modern low powered Kabini/Temash APUs from AMD, I was searching the internet for information regarding the compute performance of their GPU. I could find almost nothing. The GPU is supposed to be based on the GCN architecture but no more information was available. In addition, AMD's APP SDK documents are outdated and do not include any information about this APU. In fact they do not even include any information about the Bonaire GPU (HD 7790 & R7 260X branded cards) which is even older. AMD should definitely change its policy if it wants to be taken seriously about GPU computing. I hope an updated reference guide will be released any time soon, covering all recently released GPUs/APUs (Kabini/Temash, Bonaire, Hawaii) and what is about to be released (the Kaveri APU).

So, recently I got access to a small form-factor laptop based on the A6-1450 APU (Temash) and I would like to share some of the experience I had with it. After struggling for 1-2 days to install a Linux distro on it, I managed to install Ubuntu 12.04.3. I couldn't install a more recent version (i.e. 13.10) as the installer needed to start a graphics mode, which was not possible with the supplied kernel. 12.04.3 installed fine and thereafter I was able to install Catalyst manually. As I have already tested with the Ubuntu 14.04 Alpha 1, this issue seems to be fixed.

In theory this APU features a quad core Jaguar CPU and a 128 shader GPU (HD 8250) operating at 300MHz with an overclock capability (max 400MHz). Unfortunately, memory is clocked at 1066MHz though I hoped it would be 1333MHz.

Like all released APUs, this one supports OpenCL. So, I'll provide some information here for anyone who is interested. First, here is the revealing output of the clinfo tool:

Number of platforms:     1
  Platform Profile:     FULL_PROFILE
  Platform Version:     OpenCL 1.2 AMD-APP (1214.3)
  Platform Name:     AMD Accelerated Parallel Processing
  Platform Vendor:     Advanced Micro Devices, Inc.
  Platform Extensions:     cl_khr_icd cl_amd_event_callback cl_amd_offline_devices


  Platform Name:     AMD Accelerated Parallel Processing
Number of devices:     2
  Device Type:      CL_DEVICE_TYPE_GPU
  Device ID:      4098
  Board name:      AMD Radeon HD 8250
  Device Topology:     PCI[ B#0, D#1, F#0 ]
  Max compute units:     2
  Max work items dimensions:    3
    Max work items[0]:     256
    Max work items[1]:     256
    Max work items[2]:     256
  Max work group size:     256
  Preferred vector width char:    4
  Preferred vector width short:    2
  Preferred vector width int:    1
  Preferred vector width long:    1
  Preferred vector width float:    1
  Preferred vector width double:   1
  Native vector width char:    4
  Native vector width short:    2
  Native vector width int:    1
  Native vector width long:    1
  Native vector width float:    1
  Native vector width double:    1
  Max clock frequency:     400Mhz
  Address bits:      32
  Max memory allocation:    136839168
  Image support:     Yes
  Max number of images read arguments:   128
  Max number of images write arguments:   8
  Max image 2D width:     16384
  Max image 2D height:     16384
  Max image 3D width:     2048
  Max image 3D height:     2048
  Max image 3D depth:     2048
  Max samplers within kernel:    16
  Max size of kernel argument:    1024
  Alignment (bits) of base address:   2048
  Minimum alignment (bytes) for any datatype:  128
  Single precision floating point capability
    Denorms:      No
    Quiet NaNs:      Yes
    Round to nearest even:    Yes
    Round to zero:     Yes
    Round to +ve and infinity:    Yes
    IEEE754-2008 fused multiply-add:   Yes
  Cache type:      Read/Write
  Cache line size:     64
  Cache size:      16384
  Global memory size:     370147328
  Constant buffer size:     65536
  Max number of constant args:    8
  Local memory type:     Scratchpad
  Local memory size:     32768
  Kernel Preferred work group size multiple:  64
  Error correction support:    0
  Unified memory for Host and Device:   1
  Profiling timer resolution:    1
  Device endianess:     Little
  Available:      Yes
  Compiler available:     Yes
  Execution capabilities:     
    Execute OpenCL kernels:    Yes
    Execute native function:    No
  Queue properties:     
    Out-of-Order:     No
    Profiling :      Yes
  Platform ID:      0x00007f1d93cc6fc0
  Name:       Kalindi
  Vendor:      Advanced Micro Devices, Inc.
  Device OpenCL C version:    OpenCL C 1.2 
  Driver version:     1214.3 (VM)
  Profile:      FULL_PROFILE
  Version:      OpenCL 1.2 AMD-APP (1214.3)
  Extensions:      cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_ext_atomic_counters_32 cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_khr_image2d_from_buffer 


  Device Type:      CL_DEVICE_TYPE_CPU
  Device ID:      4098
  Board name:      
  Max compute units:     4
  Max work items dimensions:    3
    Max work items[0]:     1024
    Max work items[1]:     1024
    Max work items[2]:     1024
  Max work group size:     1024
  Preferred vector width char:    16
  Preferred vector width short:    8
  Preferred vector width int:    4
  Preferred vector width long:    2
  Preferred vector width float:    8
  Preferred vector width double:   4
  Native vector width char:    16
  Native vector width short:    8
  Native vector width int:    4
  Native vector width long:    2
  Native vector width float:    8
  Native vector width double:    4
  Max clock frequency:     600Mhz
  Address bits:      64
  Max memory allocation:    2147483648
  Image support:     Yes
  Max number of images read arguments:   128
  Max number of images write arguments:   8
  Max image 2D width:     8192
  Max image 2D height:     8192
  Max image 3D width:     2048
  Max image 3D height:     2048
  Max image 3D depth:     2048
  Max samplers within kernel:    16
  Max size of kernel argument:    4096
  Alignment (bits) of base address:   1024
  Minimum alignment (bytes) for any datatype:  128
  Single precision floating point capability
    Denorms:      Yes
    Quiet NaNs:      Yes
    Round to nearest even:    Yes
    Round to zero:     Yes
    Round to +ve and infinity:    Yes
    IEEE754-2008 fused multiply-add:   Yes
  Cache type:      Read/Write
  Cache line size:     64
  Cache size:      32768
  Global memory size:     5670133760
  Constant buffer size:     65536
  Max number of constant args:    8
  Local memory type:     Global
  Local memory size:     32768
  Kernel Preferred work group size multiple:  1
  Error correction support:    0
  Unified memory for Host and Device:   1
  Profiling timer resolution:    1
  Device endianess:     Little
  Available:      Yes
  Compiler available:     Yes
  Execution capabilities:     
    Execute OpenCL kernels:    Yes
    Execute native function:    Yes
  Queue properties:     
    Out-of-Order:     No
    Profiling :      Yes
  Platform ID:      0x00007f1d93cc6fc0
  Name:       AMD A6-1450 APU with Radeon(TM) HD Graphics
  Vendor:      AuthenticAMD
  Device OpenCL C version:    OpenCL C 1.2 
  Driver version:     1214.3 (sse2,avx)
  Profile:      FULL_PROFILE
  Version:      OpenCL 1.2 AMD-APP (1214.3)
  Extensions:      cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_ext_device_fission cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt 

It's good that double precision arithmetic is actually supported on this APU (the Brazos APUs did not support it), which is something I didn't know. I measured the raw performance using FlopsCL (http://olab.is.s.u-tokyo.ac.jp/~kamil.rocki/projects.html) and it proved to be 91 GFLOPS in single precision and 6.4 GFLOPS in double precision arithmetic. It's not the supercomputer you were looking for, but consider that the whole APU has just an 8W TDP.

Next, I measured the effective memory bandwidth with a custom OpenCL application. It proved to reach nearly 7GB/sec, which is just OK.
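The idea behind the measurement is simple; a plain copy kernel like the following sketch (not the exact code of my application) can be timed, and the effective bandwidth derived as bytes read plus bytes written divided by the elapsed time:

/* Each workitem copies one float4 element (16 bytes read + 16 bytes written),
   so effective bandwidth = 2 * 16 * N bytes / kernel execution time */
__kernel void copy_bw(__global const float4 *src, __global float4 *dst){
 int i = get_global_id(0);
 dst[i] = src[i];
}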

Lastly, I tried NVidia's nbody simulation (it was included in the CUDA SDKs prior to version 5). With a small modification it can run on AMD GPUs as well (and equally well).
Here is a screenshot:

NBody simulation on Ubuntu
NVidia's nbody sample OpenCL application on A6-1450

The results are quite good. For a 16384-body benchmark (parameters: --qatest --n=16384) the APU performed at almost 50 GFLOPS (49.67). Let me note here that my 8600GTS did about the same!

In summary, the APU constitutes a nice mobile development platform for OpenCL applications, supporting double precision maths with a minimal power footprint.