Sweet Snippets - Handling Input and Callbacks with Awesomium

Washu


In our previous entry we started rendering a UI overlay on our application. We added some basic interactivity, in that we can update a health bar by sending javascript commands. However, what if we want the UI to be able to notify us of actions? That's what we're going to cover in this entry. In addition, we're going to add mouse input forwarding to Awesomium, so that the UI can respond to events triggered by the mouse.

Introduction


[Screenshot: the sample application with the UI overlay]


Adding mouse input forwarding is fairly simple: you need to trap the WM_MOUSEMOVE, WM_LBUTTONDOWN, and WM_LBUTTONUP Windows messages. These cover pretty much our entire use case, which is the ability to click buttons and detect mouse-over events. Handling them is fairly trivial, as you can see from the snippet below for WM_MOUSEMOVE:
```cpp
LRESULT OnMouseMove(unsigned message, WPARAM wParam, LPARAM lParam, BOOL & handled) {
    int xPos = GET_X_LPARAM(lParam);
    int yPos = GET_Y_LPARAM(lParam);
    if (m_view) {
        // Bitwise test: we want to know whether the left button is held down.
        if (wParam & MK_LBUTTON)
            m_view->InjectMouseDown(Awesomium::kMouseButton_Left);
        m_view->InjectMouseMove(xPos, yPos);
    }
    return 0;
}
```

If you've been using a nice UI stylesheet, it should automatically start highlighting things now, and you should be able to tell when you have given focus to things like buttons. More importantly, you can now click various links, and if you're using the HTML from the previous post, you can show and hide the quest tracker.

Getting Feedback From the UI


One of our goals is to allow the UI to tell the game things. For instance, if the user clicks on the skill button at the bottom, we expect it to execute whatever skill is bound to that button. To do this we need to bind a global javascript object to the Awesomium WebView, and then map the C++ functions we desire to call onto javascript functions we add to the global object.

We do this fairly simply, using a map of ID and javascript function name to std::function objects:
```cpp
m_jsApp = m_view->CreateGlobalJavascriptObject(Awesomium::WSLit("app"));
Awesomium::JSObject & appObject = m_jsApp.ToObject();
appObject.SetCustomMethod(Awesomium::WSLit("skill"), false);
JsCallerKey key(appObject.remote_id(), Awesomium::WSLit("skill"));
m_jsFunctions[key] = std::bind(&MainWindow::OnSkill, this,
                               std::placeholders::_1, std::placeholders::_2);
```

In this case we're binding the OnSkill non-static member function to the javascript function "skill" on the "app" object. We could also have used a lambda here, or a static function.
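The lookup itself is just an exercise in std::map and std::function. Here's a self-contained sketch of the same dispatch pattern; the JsArgs type, the numeric object id, and the JsDispatcher name are my own simplifications, since the real keys use Awesomium's WebString and remote object ids:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Stand-ins for Awesomium's (remote object id, method name) key and JSArray.
using JsCallerKey = std::pair<unsigned, std::string>;
using JsArgs      = std::vector<int>;
using JsFunction  = std::function<void(JsArgs const &)>;

struct JsDispatcher {
    std::map<JsCallerKey, JsFunction> functions;

    void Bind(unsigned objectId, std::string name, JsFunction fn) {
        functions[JsCallerKey(objectId, std::move(name))] = std::move(fn);
    }

    // Mirrors OnMethodCall: look up by (object id, method name), invoke if found.
    bool Call(unsigned objectId, std::string const & name, JsArgs const & args) {
        auto itor = functions.find(JsCallerKey(objectId, name));
        if (itor == functions.end())
            return false;
        itor->second(args);
        return true;
    }
};
```

The point of the pair-keyed map is that two different global objects can each expose a method with the same name without colliding.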

Of course, since there's no actual relationship between the javascript function name and the C++ function, we need to build a binding system. Thankfully, Awesomium comes with a method handler which allows it to notify us whenever a javascript function is invoked on our global object. In our case, for simplicity, we implement the interface on the MainWindow class, however in general I would actually recommend implementing this on a separate object entirely.
```cpp
m_view->set_js_method_handler(this);
```

After this we just have to implement the two methods the handler interface requires, OnMethodCall and OnMethodCallWithReturnValue, and have them query our map for any function matching the specified object ID and method name. If a function is found, we invoke it with the expected parameters:
```cpp
void OnMethodCall(Awesomium::WebView * caller, unsigned remoteObjectId,
                  Awesomium::WebString const & methodName, Awesomium::JSArray const & args) {
    JsCallerKey key(remoteObjectId, methodName);
    auto itor = m_jsFunctions.find(key);
    if (itor != m_jsFunctions.end()) {
        itor->second(caller, args);
    }
}
```

With this in place, and our app.skill function bound, our HTML can trivially invoke it:
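The original HTML snippet did not survive the page capture; a minimal hypothetical button, consistent with the OnSkill handler in the full sample (which expects a single integer argument), might look like:

```html
<!-- Hypothetical markup: "app" is the global javascript object bound above -->
<button onclick="app.skill(1)">Skill 1</button>
```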
We now have the capability to allow the Awesomium UI to communicate with our game in a meaningful and event driven manner.

More Efficient Rendering


One of the other problems we're going to encounter is determining when input should be directed to the UI layer, and when input should be directed to the game systems.

Along with this we also find ourselves in a position to do a bit of optimization of our rendering. In our previous code we were using the UpdateSubresource call to update portions of our texture (created with D3D11_USAGE_DEFAULT). This has several issues:

  • It creates a copy of the memory passed into it.
  • We cannot later query the UI overlay for information.
  • The source and destination textures must be in the same format.

Now, we're not going to be changing the backing format (although you might want to for various reasons). However, by switching to a better method we can reduce our overall overhead, allow ourselves to query the texture for information, and also gain the ability to change said formats.

Our methodology will be to use a staging texture created with the D3D11_CPU_ACCESS_READ and D3D11_CPU_ACCESS_WRITE flags. Why read? The simple answer is: we will eventually want to know when a pixel of the UI is transparent. That way we can determine whether the mouse is currently over a UI element or over the gameplay view.
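Before wiring that check into Direct3D, the idea can be sketched against a plain BGRA byte buffer; the function name and test buffer here are my own illustration, not part of the sample:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative stand-in for a mapped BGRA8 staging texture: a byte buffer whose
// row pitch may be wider than width * 4, as D3D11 row pitches often are.
// Returns true when the pixel at (x, y) has a non-zero alpha channel.
bool IsUIPixelBGRA(std::vector<uint8_t> const & pixels, unsigned rowPitch,
                   unsigned x, unsigned y) {
    // Byte layout per pixel is B, G, R, A, so alpha lives at byte offset 3.
    auto offset = rowPitch * y + x * 4;
    return pixels[offset + 3] != 0;
}
```

Any non-zero alpha counts as "over the UI", so even mostly-transparent anti-aliased edges will capture the mouse.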

For updating the rendered texture, we simply map our staging resource and then run through a series of memcpy calls, copying each changed row of the texture over:

```cpp
D3D11_MAPPED_SUBRESOURCE resource;
m_context->Map(m_staging, 0, D3D11_MAP_WRITE, 0, &resource);

auto srcStartingOffset = srcRowSpan * srcRect.y + srcRect.x * 4;
uint8_t * srcPtr = srcBuffer + srcStartingOffset;

auto dstStartingOffset = resource.RowPitch * destRect.y + destRect.x * 4;
uint8_t * dataPtr = reinterpret_cast<uint8_t *>(resource.pData) + dstStartingOffset;

for (int i = 0; i < destRect.height; ++i) {
    memcpy(dataPtr + resource.RowPitch * i, srcPtr + srcRowSpan * i, destRect.width * 4);
}

m_context->Unmap(m_staging, 0);
```

Once that's complete, we can simply ask Direct3D 11 to copy the updated portion of the staging texture over to our rendered texture:
```cpp
m_context->CopySubresourceRegion(m_texture, 0, destRect.x, destRect.y, 0, m_staging, 0, &box);
```

With this in hand, we can also map our staging texture for reading, and simply ask whether a particular pixel (at an x, y position) is fully transparent:
```cpp
bool IsUIPixel(unsigned x, unsigned y) {
    D3D11_MAPPED_SUBRESOURCE resource;
    m_context->Map(m_staging, 0, D3D11_MAP_READ, 0, &resource);
    // Pixels are BGRA, so alpha lives at byte offset 3, and rows are
    // RowPitch bytes apart (which may be wider than m_width * 4).
    auto startingOffset = resource.RowPitch * y + x * 4;
    uint8_t * dataPtr = reinterpret_cast<uint8_t *>(resource.pData) + startingOffset;
    bool result = dataPtr[3] != 0;
    m_context->Unmap(m_staging, 0);
    return result;
}
```

This function returns true if the queried pixel has any opaqueness to it (i.e. it is not fully transparent).
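As an aside, the row-by-row arithmetic used when updating the staging texture can be exercised without D3D at all. A minimal sketch, with hypothetical pitch values standing in for srcRowSpan and resource.RowPitch:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Copies a w x h block of 4-byte pixels from src (srcPitch bytes per row,
// starting at srcX/srcY) into dst (dstPitch bytes per row, at dstX/dstY).
// The per-row memcpy is needed because the two pitches generally differ.
void CopyRect(uint8_t const * src, int srcPitch, int srcX, int srcY,
              uint8_t * dst, int dstPitch, int dstX, int dstY,
              int w, int h) {
    uint8_t const * srcPtr = src + srcPitch * srcY + srcX * 4;
    uint8_t * dstPtr = dst + dstPitch * dstY + dstX * 4;
    for (int i = 0; i < h; ++i)
        std::memcpy(dstPtr + dstPitch * i, srcPtr + srcPitch * i, w * 4);
}
```

A single memcpy of the whole rectangle would only be valid when both pitches happen to equal width * 4, which you cannot rely on for mapped D3D11 resources.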

Full Sample

Note: the angle-bracketed header names in the #include directives were stripped by the page capture and are left blank below.

```cpp
#define NOMINMAX
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "awesomium.lib")
#include
#include
#include
#include
#include
#include
#include
#include
#include

#ifdef UNICODE
typedef wchar_t tchar;
typedef std::wstring tstring;
template <typename T>
tstring to_string(T t) {
    return std::to_wstring(t);
}
#else
typedef char tchar;
typedef std::string tstring;
template <typename T>
tstring to_string(T t) {
    return std::to_string(t);
}
#endif

struct Vertex {
    float position[4];
    float color[4];
    float texCoord[2];

    static const unsigned Stride = sizeof(float) * 10;
    static const unsigned Offset = 0;
};

void ThrowIfFailed(HRESULT result, std::string const & text) {
    if (FAILED(result))
        throw std::runtime_error(text + "");
}

class RenderTarget {
public:
    RenderTarget(ID3D11Texture2D * texture, bool hasDepthBuffer) : m_texture(texture) {
        CComPtr<ID3D11Device> device;
        texture->GetDevice(&device);

        auto result = device->CreateRenderTargetView(m_texture, nullptr, &m_textureRTV);
        ThrowIfFailed(result, "Failed to create back buffer render target.");

        m_viewport = CD3D11_VIEWPORT(m_texture, m_textureRTV);

        result = device->CreateTexture2D(&CD3D11_TEXTURE2D_DESC(DXGI_FORMAT_D32_FLOAT,
            static_cast<UINT>(m_viewport.Width), static_cast<UINT>(m_viewport.Height),
            1, 1, D3D11_BIND_DEPTH_STENCIL), nullptr, &m_depthBuffer);
        ThrowIfFailed(result, "Failed to create depth buffer.");

        result = device->CreateDepthStencilView(m_depthBuffer, nullptr, &m_depthView);
        ThrowIfFailed(result, "Failed to create depth buffer render target.");
    }

    void Clear(ID3D11DeviceContext * context, float color[4], bool clearDepth = true) {
        context->ClearRenderTargetView(m_textureRTV, color);
        if (clearDepth && m_depthView)
            context->ClearDepthStencilView(m_depthView, D3D11_CLEAR_DEPTH, 1.0f, 0);
    }

    void SetTarget(ID3D11DeviceContext * context) {
        context->OMSetRenderTargets(1, &m_textureRTV.p, m_depthView);
        context->RSSetViewports(1, &m_viewport);
    }

private:
    D3D11_VIEWPORT m_viewport;
    CComPtr<ID3D11Texture2D> m_depthBuffer;
    CComPtr<ID3D11DepthStencilView> m_depthView;
    CComPtr<ID3D11Texture2D> m_texture;
    CComPtr<ID3D11RenderTargetView> m_textureRTV;
};

class GraphicsDevice {
public:
    GraphicsDevice(HWND window, int width, int height) {
        D3D_FEATURE_LEVEL levels[] = {
            D3D_FEATURE_LEVEL_11_1,
            D3D_FEATURE_LEVEL_11_0,
            D3D_FEATURE_LEVEL_10_1,
            D3D_FEATURE_LEVEL_10_0,
        };

        DXGI_SWAP_CHAIN_DESC desc = {
            { width, height, { 1, 60 }, DXGI_FORMAT_R8G8B8A8_UNORM,
              DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED, DXGI_MODE_SCALING_UNSPECIFIED },
            { 1, 0 },
            DXGI_USAGE_BACK_BUFFER | DXGI_USAGE_RENDER_TARGET_OUTPUT,
            1, window, TRUE, DXGI_SWAP_EFFECT_DISCARD, 0
        };

        auto result = D3D11CreateDeviceAndSwapChain(
            nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
            D3D11_CREATE_DEVICE_DEBUG | D3D11_CREATE_DEVICE_BGRA_SUPPORT,
            levels, sizeof(levels) / sizeof(D3D_FEATURE_LEVEL), D3D11_SDK_VERSION,
            &desc, &m_swapChain, &m_device, &m_featureLevel, &m_context
        );
        ThrowIfFailed(result, "Failed to create D3D11 device.");
    }

    void Resize(int width, int height) {
        if (m_renderTarget)
            m_renderTarget.reset();

        auto result = m_swapChain->ResizeBuffers(1, width, height, DXGI_FORMAT_UNKNOWN, 0);
        ThrowIfFailed(result, "Failed to resize back buffer.");

        CComPtr<ID3D11Texture2D> backBuffer;
        result = m_swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void **>(&backBuffer));
        ThrowIfFailed(result, "Failed to retrieve back buffer surface.");

        m_renderTarget = std::make_unique<RenderTarget>(backBuffer, true);
    }

    void SetAndClearTarget() {
        static float color[] { 0, 0, 0, 0 };
        if (!m_renderTarget)
            return;
        m_renderTarget->Clear(m_context, color);
        m_renderTarget->SetTarget(m_context);
    }

    void Present() {
        m_swapChain->Present(0, 0);
    }

    ID3D11Device * GetDevice() { return m_device; }
    ID3D11DeviceContext * GetDeviceContext() { return m_context; }

private:
    D3D_FEATURE_LEVEL m_featureLevel;
    CComPtr<ID3D11Device> m_device;
    CComPtr<ID3D11DeviceContext> m_context;
    CComPtr<IDXGISwapChain> m_swapChain;
    std::unique_ptr<RenderTarget> m_renderTarget;
};

class D3DSurface : public Awesomium::Surface {
public:
    D3DSurface(ID3D11DeviceContext * context, Awesomium::WebView * view, int width, int height)
        : m_context(context), m_view(view), m_width(width), m_height(height) {
        CComPtr<ID3D11Device> device;
        context->GetDevice(&device);

        auto result = device->CreateTexture2D(&CD3D11_TEXTURE2D_DESC(DXGI_FORMAT_B8G8R8A8_UNORM,
            width, height, 1, 1), nullptr, &m_texture);
        result = device->CreateShaderResourceView(m_texture, nullptr, &m_textureView);
        result = device->CreateTexture2D(&CD3D11_TEXTURE2D_DESC(DXGI_FORMAT_B8G8R8A8_UNORM,
            width, height, 1, 1, 0, D3D11_USAGE_STAGING,
            D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE), nullptr, &m_staging);
    }

    virtual void Paint(unsigned char * srcBuffer, int srcRowSpan,
                       const Awesomium::Rect & srcRect, const Awesomium::Rect & destRect) {
        auto box = CD3D11_BOX(destRect.x, destRect.y, 0,
                              destRect.x + destRect.width, destRect.y + destRect.height, 1);

        D3D11_MAPPED_SUBRESOURCE resource;
        m_context->Map(m_staging, 0, D3D11_MAP_WRITE, 0, &resource);

        auto srcStartingOffset = srcRowSpan * srcRect.y + srcRect.x * 4;
        uint8_t * srcPtr = srcBuffer + srcStartingOffset;

        auto dstStartingOffset = resource.RowPitch * destRect.y + destRect.x * 4;
        uint8_t * dataPtr = reinterpret_cast<uint8_t *>(resource.pData) + dstStartingOffset;

        for (int i = 0; i < destRect.height; ++i) {
            memcpy(dataPtr + resource.RowPitch * i, srcPtr + srcRowSpan * i, destRect.width * 4);
        }

        m_context->Unmap(m_staging, 0);
        m_context->CopySubresourceRegion(m_texture, 0, destRect.x, destRect.y, 0, m_staging, 0, &box);
    }

    virtual void Scroll(int dx, int dy, const Awesomium::Rect & clipRect) {
        auto box = CD3D11_BOX(clipRect.x, clipRect.y, 0,
                              clipRect.x + clipRect.width, clipRect.y + clipRect.height, 1);
        m_context->CopySubresourceRegion(m_texture, 0, clipRect.x + dx, clipRect.y + dy, 0, m_texture, 0, &box);
    }

    void Bind() {
        m_context->PSSetShaderResources(0, 1, &m_textureView.p);
    }

    bool IsUIPixel(unsigned x, unsigned y) {
        D3D11_MAPPED_SUBRESOURCE resource;
        m_context->Map(m_staging, 0, D3D11_MAP_READ, 0, &resource);
        auto startingOffset = resource.RowPitch * y + x * 4;
        uint8_t * dataPtr = reinterpret_cast<uint8_t *>(resource.pData) + startingOffset;
        bool result = dataPtr[3] != 0; // alpha channel of the BGRA pixel
        m_context->Unmap(m_staging, 0);
        return result;
    }

    virtual ~D3DSurface() { }

private:
    CComPtr<ID3D11ShaderResourceView> m_textureView;
    CComPtr<ID3D11Texture2D> m_texture;
    CComPtr<ID3D11Texture2D> m_staging;
    ID3D11DeviceContext * m_context;
    Awesomium::WebView * m_view;
    int m_width;
    int m_height;
};

class D3DSurfaceFactory : public Awesomium::SurfaceFactory {
public:
    D3DSurfaceFactory(ID3D11DeviceContext * context) : m_context(context) { }

    virtual Awesomium::Surface * CreateSurface(Awesomium::WebView * view, int width, int height) {
        return new D3DSurface(m_context, view, width, height);
    }

    virtual void DestroySurface(Awesomium::Surface * surface) {
        delete surface;
    }

private:
    ID3D11DeviceContext * m_context;
};

class MainWindow : public CWindowImpl<MainWindow>, public Awesomium::JSMethodHandler {
public:
    DECLARE_WND_CLASS_EX(ClassName, CS_OWNDC | CS_HREDRAW | CS_VREDRAW, COLOR_BACKGROUND + 1);

    MainWindow(Awesomium::WebCore * webCore)
        : m_webCore(webCore),
          m_view(nullptr, [](Awesomium::WebView * ptr) { ptr->Destroy(); }),
          m_isMaximized(false),
          m_surface(nullptr) {
        RECT rect = { 0, 0, 800, 600 };
        AdjustWindowRectEx(&rect, GetWndStyle(0), FALSE, GetWndExStyle(0));
        Create(nullptr, RECT{ 0, 0, rect.right - rect.left, rect.bottom - rect.top }, WindowName);
        ShowWindow(SW_SHOW);
        UpdateWindow();
    }

    void Run() {
        MSG msg;
        while (true) {
            if (PeekMessage(&msg, 0, 0, 0, PM_REMOVE)) {
                if (msg.message == WM_QUIT)
                    break;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            } else {
                Update();
            }
        }
    }

    void Update() {
        auto context = m_device->GetDeviceContext();

        m_webCore->Update();
        if (m_view->IsLoading()) {
            m_isLoading = true;
        } else if (m_isLoading) {
            m_isLoading = false;
            UpdateBossHealth();
            m_webCore->Update();
            m_surface = static_cast<D3DSurface *>(m_view->surface());
        }

        m_device->SetAndClearTarget();
        context->OMSetBlendState(m_blendState, nullptr, ~0);
        context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        context->IASetIndexBuffer(nullptr, (DXGI_FORMAT)0, 0);
        context->IASetVertexBuffers(0, 1, &m_vertexBuffer.p, &Vertex::Stride, &Vertex::Offset);
        context->IASetInputLayout(m_inputLayout);
        context->VSSetShader(m_triangleVS, nullptr, 0);
        context->PSSetShader(m_trianglePS, nullptr, 0);
        context->Draw(3, 0);

        context->IASetVertexBuffers(0, 0, nullptr, nullptr, nullptr);
        context->IASetInputLayout(nullptr);
        context->VSSetShader(m_vertexShader, nullptr, 0);
        context->PSSetShader(m_pixelShader, nullptr, 0);
        context->PSSetSamplers(0, 1, &m_sampler.p);
        if (m_surface)
            m_surface->Bind();
        context->Draw(3, 0);

        m_device->Present();
    }

    void OnMethodCall(Awesomium::WebView * caller, unsigned remoteObjectId,
                      Awesomium::WebString const & methodName, Awesomium::JSArray const & args) {
        JsCallerKey key(remoteObjectId, methodName);
        auto itor = m_jsFunctions.find(key);
        if (itor != m_jsFunctions.end()) {
            itor->second(caller, args);
        }
    }

    Awesomium::JSValue OnMethodCallWithReturnValue(Awesomium::WebView * caller, unsigned remoteObjectId,
                                                   Awesomium::WebString const & methodName,
                                                   Awesomium::JSArray const & args) {
        JsCallerKey key(remoteObjectId, methodName);
        auto itor = m_jsFunctionsWithRetValue.find(key);
        if (itor != m_jsFunctionsWithRetValue.end()) {
            return itor->second(caller, args);
        }
        return Awesomium::JSValue();
    }

private:
    BEGIN_MSG_MAP(MainWindow)
        MESSAGE_HANDLER(WM_DESTROY, [](unsigned messageId, WPARAM wParam, LPARAM lParam, BOOL & handled) {
            PostQuitMessage(0);
            return 0;
        });
        MESSAGE_HANDLER(WM_CREATE, OnCreate);
        MESSAGE_HANDLER(WM_SIZE, OnSize);
        MESSAGE_HANDLER(WM_EXITSIZEMOVE, OnSizeFinish);
        MESSAGE_HANDLER(WM_KEYUP, OnKeyUp);
        MESSAGE_HANDLER(WM_MOUSEMOVE, OnMouseMove);
        MESSAGE_HANDLER(WM_LBUTTONDOWN, OnMouseLButtonDown);
        MESSAGE_HANDLER(WM_LBUTTONUP, OnMouseLButtonUp);
    END_MSG_MAP()

private:
    LRESULT OnCreate(unsigned message, WPARAM wParam, LPARAM lParam, BOOL & handled) {
        try {
            RECT rect;
            GetClientRect(&rect);
            m_device = std::make_unique<GraphicsDevice>(m_hWnd, rect.right, rect.bottom);

            tstring filename(MAX_PATH, 0);
            GetModuleFileName(GetModuleHandle(nullptr), &filename.front(), filename.length());
            filename = filename.substr(0, filename.find_last_of('\\'));
            SetCurrentDirectory(filename.c_str());

            CreateD3DResources();

            m_surfaceFactory = std::make_unique<D3DSurfaceFactory>(m_device->GetDeviceContext());
            m_webCore->set_surface_factory(m_surfaceFactory.get());

            CreateWebView(rect.right, rect.bottom);
            m_device->Resize(rect.right, rect.bottom);
        } catch (std::runtime_error & ex) {
            std::cout << ex.what() << std::endl;
            return -1;
        }
        return 0;
    }

    LRESULT OnSizeFinish(unsigned message, WPARAM wParam, LPARAM lParam, BOOL & handled) {
        try {
            RECT clientRect;
            GetClientRect(&clientRect);
            m_device->Resize(clientRect.right, clientRect.bottom);
            if (m_view->IsLoading())
                m_view->Stop();
            m_surface = nullptr;
            CreateWebView(clientRect.right, clientRect.bottom);
        } catch (std::runtime_error & ex) {
            std::cout << ex.what() << std::endl;
        }
        return 0;
    }

    LRESULT OnSize(unsigned message, WPARAM wParam, LPARAM lParam, BOOL & handled) {
        if (wParam == SIZE_MAXIMIZED) {
            m_isMaximized = true;
            return OnSizeFinish(message, wParam, lParam, handled);
        } else {
            if (m_isMaximized) {
                m_isMaximized = false;
                return OnSizeFinish(message, wParam, lParam, handled);
            }
        }
        return 0;
    }

    LRESULT OnKeyUp(unsigned message, WPARAM wParam, LPARAM lParam, BOOL & handled) {
        if (wParam == 'A' && m_view) {
            --m_bossHealth;
            UpdateBossHealth();
        }
        return DefWindowProc(message, wParam, lParam);
    }

    LRESULT OnMouseMove(unsigned message, WPARAM wParam, LPARAM lParam, BOOL & handled) {
        int xPos = GET_X_LPARAM(lParam);
        int yPos = GET_Y_LPARAM(lParam);
        if (m_view) {
            if (wParam & MK_LBUTTON)
                m_view->InjectMouseDown(Awesomium::kMouseButton_Left);
            m_view->InjectMouseMove(xPos, yPos);
        }
        return 0;
    }

    LRESULT OnMouseLButtonDown(unsigned message, WPARAM wParam, LPARAM lParam, BOOL & handled) {
        if (m_view) {
            m_view->InjectMouseDown(Awesomium::kMouseButton_Left);
        }
        return 0;
    }

    LRESULT OnMouseLButtonUp(unsigned message, WPARAM wParam, LPARAM lParam, BOOL & handled) {
        if (m_view) {
            m_view->InjectMouseUp(Awesomium::kMouseButton_Left);
        }
        return 0;
    }

private:
    void CreateD3DResources() {
        auto device = m_device->GetDevice();

        std::vector<char> vs(std::istreambuf_iterator<char>(std::ifstream("FullScreenTriangleVS.cso", std::ios_base::in | std::ios_base::binary)), std::istreambuf_iterator<char>());
        auto result = device->CreateVertexShader(&vs.front(), vs.size(), nullptr, &m_vertexShader);
        ThrowIfFailed(result, "Could not create vertex shader.");

        std::vector<char> ps(std::istreambuf_iterator<char>(std::ifstream("FullScreenTrianglePS.cso", std::ios_base::in | std::ios_base::binary)), std::istreambuf_iterator<char>());
        result = device->CreatePixelShader(&ps.front(), ps.size(), nullptr, &m_pixelShader);
        ThrowIfFailed(result, "Could not create pixel shader.");

        result = device->CreateSamplerState(&CD3D11_SAMPLER_DESC(CD3D11_DEFAULT()), &m_sampler);
        ThrowIfFailed(result, "Could not create sampler state.");

        vs.assign(std::istreambuf_iterator<char>(std::ifstream("TriangleVS.cso", std::ios_base::in | std::ios_base::binary)), std::istreambuf_iterator<char>());
        result = device->CreateVertexShader(&vs.front(), vs.size(), nullptr, &m_triangleVS);
        ThrowIfFailed(result, "Could not create vertex shader.");

        ps.assign(std::istreambuf_iterator<char>(std::ifstream("TrianglePS.cso", std::ios_base::in | std::ios_base::binary)), std::istreambuf_iterator<char>());
        result = device->CreatePixelShader(&ps.front(), ps.size(), nullptr, &m_trianglePS);
        ThrowIfFailed(result, "Could not create pixel shader.");

        std::vector<D3D11_INPUT_ELEMENT_DESC> inputElementDesc = {
            { "SV_POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 0 },
            { "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 4 * sizeof(float) },
        };
        result = device->CreateInputLayout(&inputElementDesc.front(), inputElementDesc.size(), &vs.front(), vs.size(), &m_inputLayout);
        ThrowIfFailed(result, "Unable to create input layout.");

        // Hard coded triangle. Tis a silly idea, but works for the sample.
        Vertex vertices[] = {
            { { 0.0f, 0.5f, 0.5f, 1.0f }, { 1.0f, 0.0f, 0.0f, 1.0f }, { 0.5f, 1.0f } },
            { { 0.5f, -0.5f, 0.5f, 1.0f }, { 0.0f, 1.0f, 0.0f, 1.0f }, { 0.0f, 0.0f } },
            { { -0.5f, -0.5f, 0.5f, 1.0f }, { 0.0f, 0.0f, 1.0f, 1.0f }, { 1.0f, 0.0f } },
        };

        D3D11_BUFFER_DESC desc = { sizeof(vertices), D3D11_USAGE_DEFAULT, D3D11_BIND_VERTEX_BUFFER };
        D3D11_SUBRESOURCE_DATA data = { vertices };
        result = device->CreateBuffer(&desc, &data, &m_vertexBuffer);
        ThrowIfFailed(result, "Failed to create vertex buffer.");

        D3D11_BLEND_DESC blendDesc;
        blendDesc.AlphaToCoverageEnable = false;
        blendDesc.IndependentBlendEnable = false;
        blendDesc.RenderTarget[0] = { true, D3D11_BLEND_SRC_ALPHA, D3D11_BLEND_INV_SRC_ALPHA, D3D11_BLEND_OP_ADD,
                                      D3D11_BLEND_ONE, D3D11_BLEND_ZERO, D3D11_BLEND_OP_ADD,
                                      D3D11_COLOR_WRITE_ENABLE_ALL };
        device->CreateBlendState(&blendDesc, &m_blendState);
    }

    void UpdateBossHealth() {
        auto javascript = std::string("$('#progressbar').progressbar({ value: ") + std::to_string(m_bossHealth) + "}); ";
        m_view->ExecuteJavascript(Awesomium::ToWebString(javascript), Awesomium::WSLit(""));
    }

    void CreateWebView(int width, int height) {
        m_view.reset(m_webCore->CreateWebView(width, height, nullptr, Awesomium::kWebViewType_Offscreen));
        CreateAndSetJSFunctions();
        m_view->SetTransparent(true);
        Awesomium::WebURL url(Awesomium::WSLit(URL));
        m_view->LoadURL(url);
    }

    void CreateAndSetJSFunctions() {
        m_view->set_js_method_handler(this);
        m_jsApp = m_view->CreateGlobalJavascriptObject(Awesomium::WSLit("app"));
        Awesomium::JSObject & appObject = m_jsApp.ToObject();
        appObject.SetCustomMethod(Awesomium::WSLit("skill"), false);
        JsCallerKey key(appObject.remote_id(), Awesomium::WSLit("skill"));
        m_jsFunctions[key] = std::bind(&MainWindow::OnSkill, this, std::placeholders::_1, std::placeholders::_2);
    }

    Awesomium::JSValue OnSkill(Awesomium::WebView * view, Awesomium::JSArray const & args) {
        if (args.size() == 0)
            return Awesomium::JSValue();
        Awesomium::JSValue const & arg = args[0];
        if (!arg.IsInteger())
            return Awesomium::JSValue();
        switch (arg.ToInteger()) {
        case 1:
            --m_bossHealth;
            UpdateBossHealth();
            break;
        default:
            break;
        }
        return Awesomium::JSValue();
    }

private:
    typedef std::pair<unsigned, Awesomium::WebString> JsCallerKey;
    typedef std::function<Awesomium::JSValue(Awesomium::WebView *, Awesomium::JSArray const &)> JsFunction;

    std::unique_ptr<GraphicsDevice> m_device;
    std::unique_ptr<D3DSurfaceFactory> m_surfaceFactory;
    std::unique_ptr<Awesomium::WebView, void (*)(Awesomium::WebView *)> m_view;
    Awesomium::WebCore * m_webCore;
    D3DSurface * m_surface;
    CComPtr<ID3D11PixelShader> m_pixelShader;
    CComPtr<ID3D11VertexShader> m_vertexShader;
    CComPtr<ID3D11SamplerState> m_sampler;
    CComPtr<ID3D11BlendState> m_blendState;
    CComPtr<ID3D11Buffer> m_vertexBuffer;
    CComPtr<ID3D11PixelShader> m_trianglePS;
    CComPtr<ID3D11VertexShader> m_triangleVS;
    CComPtr<ID3D11InputLayout> m_inputLayout;
    Awesomium::JSValue m_jsApp;
    std::map<JsCallerKey, JsFunction> m_jsFunctions;
    std::map<JsCallerKey, JsFunction> m_jsFunctionsWithRetValue;
    int m_bossHealth = 100;
    bool m_isLoading;
    bool m_isMaximized;

private:
    static const tchar * ClassName;
    static const tchar * WindowName;
    static const char * URL;
};

const tchar * MainWindow::WindowName = _T("DX Window");
const tchar * MainWindow::ClassName = _T("GameWindowClass");
const char * MainWindow::URL = "file:///./Resources/UIInterface.html";

int main() {
    Awesomium::WebCore * webCore = Awesomium::WebCore::Initialize(Awesomium::WebConfig());
    {
        MainWindow window(webCore);
        window.Run();
    }
    Awesomium::WebCore::Shutdown();
}
```
4 Comments


Recommended Comments

Washu, have you considered putting this code into a GitHub repository? It would help you maintain it better and give people better access to it if it were versioned and documented via commit logs. Some of this stuff makes for really great examples, and I think it would be a worthwhile resource for the community.

 

What do you think?

I agree, and I will be adding a GitHub link once I'm done setting it up.


Interesting to see that someone else is having to overcome similar problems. 

 

I've recently been working on the filtering of the Input to an Awesomium WebView. 

 

In the API there is a FocusedElementType Awesomium::WebView::focused_element_type() method.

 

This returns kFocusedElementType_None when there is no element focused. This seems like a great solution for textboxes, but I'm not sure whether it would work for buttons (yet to play around with it).

 

Currently I have implemented a MouseOver, MouseOut JavaScript function that is a callback to my C++ and activates/deactivates the WebView for Input. 

 

This seems to work for the mouse, so I think maybe having my current method for mouse input and then the focus element approach for keyboard input might be a perfect solution.

 

I can't wait to see what solution you come up with :)


This actually probably works better than my idea, which is just to test whether the pixel is transparent or not. Honestly, the move over to Map/Unmap would stay the same either way, as that's the recommended method for updating dynamic textures, although I would probably just make it a dynamic texture instead of using a staging texture.

