
DvDmanDT

Member Since 10 Dec 2004
Offline Last Active May 27 2016 05:48 AM

#5270020 What programming languages support VLAs?

Posted by DvDmanDT on 08 January 2016 - 04:41 AM

IIRC, some C compilers implement VLAs using malloc/free. Not sure where we encountered it, but my guess is some vendor-supplied compiler for one of those non-ARM microcontroller architectures.




#5269211 Criticism of C++

Posted by DvDmanDT on 04 January 2016 - 10:06 AM

I completely disagree with you on this. :)

 

If something is implementation defined, then it's typically platform specific as well, in which case we could just as well use vendor-supplied intrinsics or similar. You would retain 100% of the power while greatly reducing error-proneness. Design with normal day-to-day usage in mind, not some super-rare edge case.

 

I'm not talking about removing capabilities, I'm talking about making error-prone stuff more explicit. IMHO, language features that help me write "correct" code are extremely desirable, especially for low-level systems code.

 

As for naming standards, I agree that a language should not enforce this, but there could be a de facto standard that everyone uses (like in C# and Java).




#5269176 Criticism of C++

Posted by DvDmanDT on 04 January 2016 - 06:12 AM

 

Is that what C# (Microsoft) and Objective-C (Apple) and Go (Google) were supposed to be?

Objective-C = "Objective-C is a general-purpose, object-oriented programming language that adds Smalltalk-style messaging to the C programming language. It is the main programming language used by Apple for the OS X and iOS operating systems,"
C# = "C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions."
Go = "Designed primarily for systems programming, it is a compiled, statically typed language in the tradition of C and C++"

 

 

C# is awesome, but it's mainly for application-level software. You can use pointers and such in C# and I occasionally do, but the support is very limited and you lose most of the advantages of the language. A major design goal of C#/.NET is the ability to interface with existing code, and we do have C++/CLI, so there's a fairly reasonable migration path (at least if we can get C++/CLI on other platforms), but the language is still too high level.

 

Objective-C seems closer (never used it) but has unfamiliar syntax and seems to lack certain features that might be required. No idea about migration paths.

 

Go also has unfamiliar syntax and appears to be closer to Ada in philosophy. Kind of a different market than C++. Again, no idea about migration paths; I have no experience with it either.

 

What I want is a language with C#-like syntax but for lower-level stuff. Not GC-based. A modern compilation model. Less "implementation defined" behavior: either it's OK or it's not. Less reliance on preprocessors. Well-defined primitives. A default recommended naming convention. No more declare-before-use, except for locals. Maintain the power of C++ while making it harder to make mistakes. Make common pitfalls more explicit (casting, switch/case fall-through, ...).




#5269110 Criticism of C++

Posted by DvDmanDT on 03 January 2016 - 08:33 PM


 

No, it's there to make sure you don't pay a cost for wasted memory or speed (or even crashes) no matter which platform you are on.

If you want your data to work across platforms, you need to accept the cost and use a library or implement it yourself.

Also, if you really want standard sized data types, they already exist as a part of C99 in the form of int32_t and the like.

 
I know; all the real-world code I've seen uses those, or custom typedefs like them. I can almost never use 'int' because it might be 16 bits and unable to hold my data, while I can't use 'long' because it might end up being 64 bits and increase my memory usage to the point of stack overflows or whatever.
 

Not sure what you expect. They do exactly what they say on the tin - trade performance for memory. I don't know of any hidden gotchas with them.


#include <iostream>

struct MyStruct
{
	unsigned int field1 : 1; // allocated in an 'unsigned int' storage unit
	char field2 : 7;         // different storage unit type; may not pack with field1
};

int main()
{
	std::cout << sizeof(MyStruct); // often prints 8, not the 1 or 4 one might expect
	return 0;
}
One could expect 1, or maybe 4, yet many (most?) compilers will give 8: because the two fields have different storage unit types, the bits will not be packed into the same byte as one might expect. It makes a lot of sense once you know why, especially on little-endian platforms, but it's still easy to be confused.

A lot of that carries over from C and must be carried forward to maintain backwards compatibility with mission-critical systems whose companies cannot afford to upgrade.


Exactly. I fully understand it, I just hate it.

It's been repeatedly tried by several people and companies. All have failed because C and C++ are too far entrenched, as stated above. If you cannot provide all of the features of C and C++, work with all existing C and C++ libraries, and upgrade everyone's code for free (or prove that your new magic language is worth the massive cost of a rewrite) - no sale.


There are lots of attempts indeed, but most have no commercial backing and/or no migration path. I think there's a huge desire for an alternative, but it needs heavy backing to break in. Someone needs to actually push it, not just "make it available". I think Microsoft, for example, could do it; they just don't really stand to gain from it right now.


#5269078 Criticism of C++

Posted by DvDmanDT on 03 January 2016 - 05:46 PM

I have tons of issues with C++ and I'm a true C# fanboy.

 

My biggest issue is the awful compilation model and the whole declare-before-use for anything but locals.

 

My second issue is the whole primitives-have-platform-defined-sizes thing. It's awful and completely breaks portability, which is ironic considering portability is exactly what it's there for. I even work with those weird architectures and compilers that could potentially benefit from it, and still: no, just no. Give me well-defined regular types, plus a special 'platform_int' for the extreme edge case where it would make sense, any time.

 

Then there are those bit-field structs. They are so not-what-you-expect that several books and college courses describe them all wrong. It's a nice feature in theory; there's just way too much WTF around it.

 

I really dislike the various implicit casting and construction rules. Some of this can be fixed by raising warning levels and turning on warnings-as-errors, but there are several I wish had been errors by default.

 

I wish some heavyweight tool provider (like MS) would create a cleaned-up systems-level language.




#5267236 trying to think up a new way to level up

Posted by DvDmanDT on 20 December 2015 - 04:48 PM

In FORCED (they spell it in all caps) you beat levels, and each level has 3 crystals you can unlock: one for completing the level, one for completing it faster than a set time limit, and one for completing it while also doing some challenge. A challenge could be something like "don't get hit by a single shockwave attack" or "don't use walls for cover from the energy beam", etc.

 

Your "level" (available skill set) was determined by the number of crystals you had unlocked, so if you had a hard time completing a specific level, you could greatly benefit from going back and completing the secondary challenges on previous levels.

 

It only suits a very particular set of games, I suppose.




#5264857 Why didn't somebody tell me?

Posted by DvDmanDT on 04 December 2015 - 05:39 AM

You can also middle-click (scroll-wheel click) an application in the taskbar to open a new instance of it. A feature I'm using a lot more, though, is right-clicking an application in the taskbar to open recent or pinned files/projects.

 

edit: Also, you can right-click an application in the taskbar and then right-click the application name in the lowest section of the popup menu to start it in admin mode.

 

edit 2: If you are on Windows 8.1 or 10, you can right-click the start button (is it still called that?) to open a menu with various tools and shortcuts. It's also available as Win+X.




#5259726 Release mode crash

Posted by DvDmanDT on 30 October 2015 - 06:30 AM

Just mentioning that if your C++ code accesses C# objects/data through pointers, you need to tell the GC not to move those objects/that data. There are methods for doing that, and you can also use fixed(...){} to temporarily "pin" data at a memory position. Not doing so can cause very rare and random crashes that are very hard to track down.

 

Edit: This probably also applies to accessing members of C++/CLI reference classes through pointers.
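
A minimal sketch of both approaches, assuming a plain byte buffer (the fixed variant requires compiling with /unsafe):

using System;
using System.Runtime.InteropServices;

class PinningExample
{
    static void Main()
    {
        byte[] buffer = new byte[256];

        // Short-lived pinning: the GC will not move 'buffer' inside this block.
        unsafe
        {
            fixed (byte* p = buffer)
            {
                // 'p' can safely be handed to native code here.
                p[0] = 0xFF;
            }
        }

        // Longer-lived pinning via GCHandle.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr p = handle.AddrOfPinnedObject();
            // 'p' stays valid until Free() is called, even across collections.
            Console.WriteLine(p);
        }
        finally
        {
            handle.Free(); // always unpin; pinned objects fragment the GC heap
        }
    }
}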




#5256856 C# Garbage Collection and performance/stalls

Posted by DvDmanDT on 12 October 2015 - 08:42 AM


On that note, is anyone here privy to any details or rumblings about plans with Microsoft and Mono going forward? With the CLR opening up, is it reasonable to start seeing some unification going on there, or is Microsoft going to do its own thing on Linux/Mac while Mono goes on its way?

 

Mono 4.x marks the start of Mono incorporating code from Microsoft's open-sourced version. From what I understand, Microsoft will continue to develop their own thing (.NET), and Mono will probably port most of that into its implementation. In other words, they'll remain separate projects but share some code and strive to complement each other across platforms. I'm also under the impression that they'll be taking steps towards a unified hosting API, but that's a vague personal interpretation based on non-official statements and mailing list posts.




#5256181 C# Garbage Collection and performance/stalls

Posted by DvDmanDT on 08 October 2015 - 05:07 AM

My experience with C# on MS .NET is that you can have short-lived objects and long-lived objects without much trouble; what you'll want to avoid are the mid-lifetime objects, since those are the ones that trigger the heavy collections.

 

With that said, I've had great success optimizing performance, memory usage and responsiveness by using value types (structs) instead of reference types for bulk objects, even though they are sometimes a bit of a hassle to use.
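
As a rough sketch of the idea (names made up for illustration): an array of structs is one contiguous allocation with no per-element objects for the GC to trace, while an array of classes means thousands of small heap objects.

// Value type: an array of these is one contiguous block of memory.
struct Particle
{
    public float X, Y;
    public float VelX, VelY;
}

class ParticleSystem
{
    // One allocation for all 10,000 particles, instead of 10,000
    // separate heap objects the GC would have to track and collect.
    private Particle[] particles = new Particle[10000];

    public void Update(float dt)
    {
        for (int i = 0; i < particles.Length; i++)
        {
            particles[i].X += particles[i].VelX * dt;
            particles[i].Y += particles[i].VelY * dt;
        }
    }
}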

 

A tip if you have ReSharper is to download the heap allocations plugin. It'll inform you whenever you are allocating on the heap: for example, when calling methods with variable arguments, when using lambdas, or when boxing value types. Most of it will be temporary local objects that are basically free, but there are cases where it'll expose serious bottlenecks.
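
For illustration, here are the kinds of hidden allocations such a tool typically flags (a hedged sketch; the exact diagnostics depend on the plugin):

using System;

class HiddenAllocations
{
    static void Main()
    {
        int i = 42;

        // Boxing: the int is copied into a new object on the heap.
        object boxed = i;

        // A 'params' call allocates an object[] for the arguments,
        // and each int is boxed on the way in.
        Console.WriteLine("{0} {1} {2} {3}", i, i + 1, i + 2, i + 3);

        // A lambda that captures a local allocates a closure object.
        Func<int> f = () => i * 2;

        Console.WriteLine(boxed + " " + f());
    }
}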




#5236709 Do you comment Above or Below your code?

Posted by DvDmanDT on 25 June 2015 - 06:06 AM

I usually comment above, occasionally alongside and every now and then I do standalone-ish comments that clarify the current state or whatever.




#5221614 Safe endian macro

Posted by DvDmanDT on 06 April 2015 - 08:09 AM

I work a lot with embedded systems, and in particular with files and memory dumps from embedded systems. In my experience, it doesn't matter much whether you are running on one endianness or the other; what matters is whether you are working with files/data from the other one. Almost all files/dumps/structures/whatever have some magic-number header, where we just do something like:

 

var val = reader.ReadUInt32();

if(val == 0xAABBCCDD)
{
  reader.SwitchEndianess = false;
}
else if(val == 0xDDCCBBAA)
{
  reader.SwitchEndianess = true;
}
else
{
  throw new InvalidDataException();
}

 

I suppose it could matter if you try to write a file with a defined byte order though.

 

[Edit]

Also, many processors have runtime-switchable endianness, including most ARMs if I'm not mistaken. :)




#5211524 Reliable UDP messages

Posted by DvDmanDT on 18 February 2015 - 02:17 PM

I'm mostly familiar with Lidgren, which is pretty much the one true networking library for .NET. It uses multiple "channels": some channels are unreliable, others are ordered but may have packets dropped, and others are reliable and ordered.

 

It will only resend packets on the reliable channels. The unreliable option allows packets to be received out of order, with some packets dropped, etc. You can mix and match channels as you see fit for your various data types, as in the sketch below.
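
A hedged sketch of what that looks like with Lidgren (the app identifier and port are made up; check the library's samples for exact usage):

using Lidgren.Network;

class NetExample
{
    static void Main()
    {
        var config = new NetPeerConfiguration("MyGame"); // must match the server's identifier
        var client = new NetClient(config);
        client.Start();
        client.Connect("localhost", 14242);

        // Position updates: losing or reordering one is fine,
        // since a newer update will arrive soon anyway.
        NetOutgoingMessage pos = client.CreateMessage();
        pos.Write(1.0f);
        pos.Write(2.0f);
        client.SendMessage(pos, NetDeliveryMethod.UnreliableSequenced);

        // Chat: must arrive, and in order, so use a reliable ordered channel.
        NetOutgoingMessage chat = client.CreateMessage();
        chat.Write("hello");
        client.SendMessage(chat, NetDeliveryMethod.ReliableOrdered);
    }
}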




#5210883 how to know most hack possiblities and find best way to handle them

Posted by DvDmanDT on 15 February 2015 - 02:46 PM

 

Thank you for answering me. I have read the code before, but I need information on how it works. As far as I saw, there is no code for encrypting; is the encryption process automatic? Does it work like RSA? What does X509Certificate do? Is it for making sure that data is from a valid client, and...? I'll be grateful for more information about what you know about SSL.

 

 

 

Yes, the actual data encryption is automatic. It uses RSA and (probably) AES internally.

 

SSL does two things. The most obvious is that it encrypts data, but it also has mechanisms to verify peers. For example, when you connect to your bank, you want to make sure not only that the communications are encrypted, but also that it really is your bank you are talking to. Such verification can be performed using an asymmetric encryption algorithm (such as RSA) and a certificate chain. The whole process is a bit too complex for me to write out here, but the point is that some authority whom everyone trusts can issue a non-fakeable (in theory, at least) certificate to someone, which can then be verified by others. The certificate contains the public encryption key to be used when communicating with that entity. The most common format for these certificates is X.509.

 

You can create a self-signed certificate with your own keys. This is typically used for testing or when you only need encryption.

 

The reason for using a trusted certificate system is that it prevents man-in-the-middle attacks, where your client unknowingly connects to a hacker who decrypts the data, reads it, re-encrypts it and passes it on to you. That can also happen with a self-signed certificate, unless it's shipped with the client.

 

Doing encryption correctly is hard. :) You should probably read up on it on Wikipedia or similar.

 

EDIT:

Certificates are most commonly used to verify servers, but they can also be used to verify clients. That could be used for white-listing, for example. I'm not sure I've ever seen anything that actually uses client certificates, however.




#5210857 how to know most hack possiblities and find best way to handle them

Posted by DvDmanDT on 15 February 2015 - 10:24 AM

RSA is super slow and is typically only used for handshaking and symmetric key exchange; after that, a symmetric algorithm such as AES is used. This is what SSL does. SSL is used by tons of things: the secure web, secure FTP, current mail protocols, and so on.
 
If you use a TCP connection and can keep it open, the SSL overhead will probably be fine; it's mostly the handshake that's expensive. Unless you are something like an MMO, but then you'll probably need reverse proxies and load balancing anyway. SSL is probably the fastest encryption solution you will find; doing it yourself will either be less secure or slower.
 
Check out the System.Net.Security.SslStream class.
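
A minimal client-side sketch (the host name is just a placeholder): AuthenticateAsClient performs the handshake, validating the server's X.509 certificate chain and negotiating the symmetric session key, after which reads and writes are transparently encrypted.

using System.Net.Security;
using System.Net.Sockets;
using System.Text;

class SslExample
{
    static void Main()
    {
        using (var tcp = new TcpClient("example.com", 443))
        using (var ssl = new SslStream(tcp.GetStream()))
        {
            // Handshake: certificate chain validation + key exchange.
            ssl.AuthenticateAsClient("example.com");

            // Everything written from here on is encrypted on the wire.
            byte[] request = Encoding.ASCII.GetBytes(
                "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n");
            ssl.Write(request);
            ssl.Flush();
        }
    }
}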





