Logic is the beginning of wisdom. We do not always use logic to determine our actions, but the more we try, the better the decisions we make.
Today, CPUs and GPUs are separate entities. The CPU is where applications execute, and the GPU is a compute offload device onto which parts of an application can be offloaded. Even if an entire application could run on a GPU, GPU computing would still require a CPU for management.
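To make the management role concrete, here is a minimal CUDA sketch (the kernel name, data, and sizes are illustrative, not taken from any particular application). Every step other than the arithmetic itself — allocating device memory, copying data, launching the kernel, synchronizing — is issued by the CPU:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// The only work the GPU does: scale each element in parallel.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main(void) {
    const int n = 1024;
    float host[1024];
    for (int i = 0; i < n; i++) host[i] = (float)i;

    // All management runs on the CPU: allocation, copies, launch, sync.
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);
    cudaDeviceSynchronize();
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[10] = %f\n", host[10]);
    return 0;
}
```

Even in this tiny example the GPU never acts on its own; the CPU orchestrates the entire computation, which is why a GPU today cannot replace a CPU outright.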
Three GPU ecosystems exist, or are being talked about – NVIDIA (with CUDA), AMD (with OpenCL) and Intel (with C++). Each GPU has its own programming interface, which makes it difficult to write a single program that fits them all. Some say that in the future we will end up with a single language, but while that is an option, there is still room for multiple interfaces – just as we have both MPI and SHMEM.
Regarding integrated solutions, AMD has its Fusion program, in which the CPU and GPU are to be united into a single-chip solution. NVIDIA announced "Project Denver" to build custom CPU cores based on the ARM architecture. Intel could certainly do the same. Does this mean GPUs will stop existing? Logic says no. Today's GPUs are much more powerful than standard CPUs, and we can assume that all the development around GPUs will continue. Therefore a gap between GPU and CPU capabilities will remain, which means that integration, while possible, will still offer the choice of customizing a single chip to be a "GPU-flavored" or a "CPU-flavored" one. The united solution may well share the same socket design or be provided as an add-in card, but that is only the physical packaging.
In other words, I believe hybrid computing will remain an important architecture in the future, and reshaping compute offload devices to act more as "service providers" will make future usage easier and more flexible.