### #Actualkloffy

Posted 13 September 2012 - 10:38 AM

> Another example - it would be like - instead of letting a class (Teacher) walk through a list of children (Students) asking them all to perform the same operation (AreYouReadyForTheFieldTrip?) - which each student determines using a different method. Your flawed example would be like the teacher deciding what to ask each student based on first identifying who they are (Tommy or Jane?) and then, if Tommy, running the Tommy-specific code, and if Jane, running the Jane-specific code. The whole point of polymorphism is to let Tommy, Jane and Ivan's creator (programmer / parents) make each one unique, as they should be - while their Teacher no longer has to understand these differences to interact with them.

I like your example, it makes a lot of sense. I have run into a similar problem in another context, yet I am still not sure what the best solution might be. The difficulties arose when I was thinking about how to design a shader system that would allow a lot of reuse and flexibility. Let us consider a simplified case with the following elements/classes:

- **Effect** - The compiled and linked shader program. It expects a certain set of inputs that have to be bound in order for it to execute properly. This is important: the input requirements are dictated by the effect.
- **Material** - Contains a set of uniforms that define the appearance of an object. To render an object, the material is bound to the effect.
- **Geometry** - Contains attribute buffers that describe the geometry of the object. To render an object, the geometry is bound to the effect.
- **Object** - Contains a Material and a Geometry (possibly also an Effect, depending on whether we want to be able to specify different effects for different objects). Can be rendered in some way (Renderer::render(Object object) or something like that).

So far, I believe this is fairly standard. In OOP, an obvious approach would be to define generic base classes and subclass them. Doing so, however, leaves me with exactly the two options that were described, and I am not happy with either one. Let me explain (I will focus on the material, because the geometry is more or less analogous):

First Option:
The base classes define an interface like this:

```
Effect
+ void Bind(Material material)

Material
* Empty *
```

And subclasses might look like this:

```
ColorEffect
- int AmbientLocation
- int DiffuseLocation
+ void Load(string file) // base.Load, ensure the program takes the required inputs, fetch AmbientLocation and DiffuseLocation
+ void Bind(Material material) // bind the uniforms to the appropriate locations

ColorMaterial
+ vec3 Ambient
+ vec3 Diffuse
```

Now, this design would be considered wrong, I suppose. It requires exactly the kind of dynamic_cast that has been criticized. The Effect::Bind(Material material) method would contain something like this (pseudocode):

```cpp
ColorMaterial& colorMaterial = dynamic_cast<ColorMaterial&>(material);

glUniform(AmbientLocation, colorMaterial.Ambient);
glUniform(DiffuseLocation, colorMaterial.Diffuse);
```

What is rather neat about this is that the effect knows exactly what it needs and takes it from the material. You could pass a more derived material and it would still work. Even less derived materials could work; however, they would need special handling, and the effect would have to provide default values for the missing uniforms.

Second Option:
Base classes look like this:

```
Effect

Material
+ void BindTo(Effect effect)
```

And subclasses look like this:

```
ColorEffect
+ int AmbientLocation
+ int DiffuseLocation

ColorMaterial
+ vec3 Ambient
+ vec3 Diffuse
+ void BindTo(Effect effect)
```

At first glance, this may look better. However, it still requires a dynamic_cast to get the concrete ColorEffect inside the BindTo method. One could argue that it is an improvement to put the binding code in the material, because the effect no longer has to differentiate between materials. But you lose some of the good properties of the first variant: in order to make more derived materials bind to less derived effects, you have to introduce special cases in the BindTo method.

So, yeah, this is probably a flawed design, but I find it difficult to come up with a better solution without sacrificing flexibility. Maybe my mistake is that I am trying to be too generic (the base classes are empty or almost empty). Essentially, it would be nice to have a system where you can change the effect without changing the material, and everything still works as long as the material provides the inputs the effect needs.

I think this post is getting pretty long, I'm curious to hear your thoughts.
