
Introduction
Adding mouse input forwarding is fairly simple: we need to trap the WM_MOUSEMOVE, WM_LBUTTONDOWN, and WM_LBUTTONUP Windows messages. These messages cover pretty much our entire use case, which is the ability to click buttons and detect mouse-over events. Handling them is straightforward, as you can see from the snippet below for WM_MOUSEMOVE:
LRESULT OnMouseMove(unsigned message, WPARAM wParam, LPARAM lParam, BOOL & handled)
{
    int xPos = GET_X_LPARAM(lParam);
    int yPos = GET_Y_LPARAM(lParam);
    if (m_view)
    {
        // MK_LBUTTON is a bit flag in wParam, so test it with bitwise AND.
        if (wParam & MK_LBUTTON)
            m_view->InjectMouseDown(Awesomium::kMouseButton_Left);
        m_view->InjectMouseMove(xPos, yPos);
    }
    return 0;
}
If you've been using a nice UI stylesheet, it should automatically start highlighting things now, and you should be able to tell when things like buttons have focus. More importantly, you can now click various links, and if you're using the HTML from the previous post, you can show and hide the quest tracker.
Getting Feedback From the UI
One of our goals is to allow the UI to tell the game things. For instance, if the user clicks on the skill button at the bottom, we expect it to execute whatever skill is bound to that button. To do this we need to bind a global javascript object to the Awesomium WebView, and then map the C++ functions we desire to call onto javascript functions we add to the global object.
We do this fairly simply, using a map of ID and javascript function name to std::function objects:
m_jsApp = m_view->CreateGlobalJavascriptObject(Awesomium::WSLit("app"));
Awesomium::JSObject & appObject = m_jsApp.ToObject();
appObject.SetCustomMethod(Awesomium::WSLit("skill"), false);
JsCallerKey key(appObject.remote_id(), Awesomium::WSLit("skill"));
m_jsFunctions[key] = std::bind(&MainWindow::OnSkill, this, std::placeholders::_1, std::placeholders::_2);
In this case we're binding the OnSkill non-static member function to the javascript function "skill" on the "app" object. We could also have used a lambda or a static function here.
Of course, since there's no actual relationship between the javascript function name and the C++ function, we need to build a binding system. Thankfully, Awesomium comes with a method handler that notifies us whenever a javascript function is invoked on our global object. For simplicity we implement the interface on the MainWindow class, although in general I would recommend implementing it on a separate object entirely.
m_view->set_js_method_handler(this);
After this we just have to implement the two methods it requires, OnMethodCall and OnMethodCallWithReturnValue, and have them query our map for any functions that match the object ID and function name specified. If the function is found, we invoke it with the expected parameters:
void OnMethodCall(Awesomium::WebView * caller, unsigned remoteObjectId, Awesomium::WebString const & methodName, Awesomium::JSArray const & args)
{
    JsCallerKey key(remoteObjectId, methodName);
    auto itor = m_jsFunctions.find(key);
    if (itor != m_jsFunctions.end())
    {
        itor->second(caller, args);
    }
}
With this in place, and our app.skill function bound, our HTML can trivially invoke it:
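For example, a button wired up to the bound function might look like this (hypothetical markup; the button label and the skill index are placeholders, not taken from the original post's HTML):

```html
<button onclick="app.skill(0)">Fireball</button>
```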
We now have the ability to let the Awesomium UI communicate with our game in a meaningful, event-driven manner.
More Efficient Rendering
One of the other problems we're going to encounter is determining when input should be directed to the UI layer, and when input should be directed to the game systems.
Along with this, we're also in a position to optimize our rendering a bit. In our previous code we used the UpdateSubresource call to update portions of our texture (created with D3D11_USAGE_DEFAULT). This has several issues:
- It creates a copy of the memory passed into it.
- We cannot later query the texture for information about the UI overlay.
- The source and destination textures must be in the same format.
Now, we're not going to change the backing format here (although you might want to for various reasons). However, switching to a better method reduces our overall overhead, lets us query the texture for information, and also gives us the ability to change said formats.
Our approach will be to use a staging texture with the D3D11_CPU_ACCESS_READ and D3D11_CPU_ACCESS_WRITE flags. Why read? Simple: we will eventually want to know whether a given pixel of the UI overlay is transparent. That lets us determine whether the mouse is currently over a UI element or over the gameplay view.
For updating the rendered texture, we simply map our staging resource, and then we run through a series of memcpy calls to copy each changed row of the texture over:
D3D11_MAPPED_SUBRESOURCE resource;
m_context->Map(m_staging, 0, D3D11_MAP_WRITE, 0, &resource);
auto srcStartingOffset = srcRowSpan * srcRect.y + srcRect.x * 4;
uint8_t * srcPtr = srcBuffer + srcStartingOffset;
auto dstStartingOffset = resource.RowPitch * destRect.y + destRect.x * 4;
uint8_t * dataPtr = reinterpret_cast<uint8_t *>(resource.pData) + dstStartingOffset;
for (int i = 0; i < destRect.height; ++i)
{
    memcpy(dataPtr + resource.RowPitch * i, srcPtr + srcRowSpan * i, destRect.width * 4);
}
m_context->Unmap(m_staging, 0);
Once that's complete, we can simply ask Direct3D11 to copy the updated portion of the staging texture over to our rendered texture:
m_context->CopySubresourceRegion(m_texture, 0, destRect.x, destRect.y, 0, m_staging, 0, &box);
With this in hand, we can also map our staging texture for reading and simply ask whether a particular pixel (at an X,Y position) belongs to the UI, i.e. is not fully transparent:
bool IsUIPixel(unsigned x, unsigned y)
{
    D3D11_MAPPED_SUBRESOURCE resource;
    m_context->Map(m_staging, 0, D3D11_MAP_READ, 0, &resource);
    // Rows of a mapped resource are RowPitch bytes apart, which may be wider than m_width * 4.
    auto startingOffset = resource.RowPitch * y + x * 4;
    uint8_t * dataPtr = reinterpret_cast<uint8_t *>(resource.pData) + startingOffset;
    // Alpha is the fourth byte of a BGRA pixel.
    bool result = dataPtr[3] != 0;
    m_context->Unmap(m_staging, 0);
    return result;
}
This function returns true if the queried pixel has any opacity at all (i.e. it is not fully transparent).
Full Sample
#define NOMINMAX
// (the #include directives here did not survive extraction)
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "awesomium.lib")