I don't usually use this space to share thoughts that don't concern my projects, but this time I'll make an exception. I'd like to share some thoughts on the recent trend of wanting to use the Rust language everywhere and the initiative by the United States government to push for mandatory use of Rust. The big slogan behind it is memory safety, because Rust uses several build-time strategies to prevent erroneous access to memory that was never allocated or has already been freed. I've used Rust in the past to program some very complex things, including a multithreaded paced transport protocol, so I know exactly what I'm talking about. The idea of making Rust a mandatory language is nothing short of insane and impractical, and I'll explain why with some solid arguments.
TextureMind Framework – Insights #4 – Use of VMware to increase compatibility
As of November 11, 2024, VMware Fusion and VMware Workstation are available for free to everyone, for commercial, educational and personal use. This is very good news, because now I can use VMware to improve the compatibility of TextureMind Framework and all the derived software with different Windows versions and Linux distros. Currently, the framework has an experimental Linux port which is compatible only with Ubuntu 24 (you can already download DWorkSim 0.2 and test it). I will set up different virtual machines to extend the compatibility to the most relevant Linux distros on the market, like Debian, RHEL, Rocky Linux, CentOS, SLES and Gentoo, following this chart:
With virtual machines available, it will be a lot easier than before. I already have a strategy to increase the range of supported distros for the framework and the derived software. Basically, I will create builds on old distros so that they also cover the latest ones (thanks to backward compatibility). The aim is to make it possible to download a single zip with the software and run it without installing anything else. The virtual machines will also be used to extend support for different versions of Windows.
TextureMind and all its activities will never be sold to any company
This is the most important commitment you should be looking for. In a world where software projects are created almost solely to be sold in the future to big companies like Microsoft, Google, Amazon or Meta, I declare that I will never sell TextureMind Framework or any of its products to any company that might be interested in purchasing them. This work is getting better and better, and in less than two months it will become my real full-time job. I will never sell my business, not even for 100 billion dollars. No big company will ever turn my work into its personal playground. The software created with the framework will always be mine alone and will be distributed, and possibly sold, by me. It will never become the half-mutilated service of a large company or the whim of some billionaire. The project will always be the same and will remain faithful to its original intentions. This should interest you a lot, because this way my work will never lose its integrity and will not be corrupted by the money-making needs typical of billion-dollar companies.
If you like my work and intend to donate, know that this money will never be wasted on a work that will betray you in the future just to make money. If you donate, you will help create one of the few surviving works in the world that are made for passion and not for money.
There will be some big news coming soon. On January 6, 2025, the first working version of TextureMind Desktop, a remote display protocol, will be released. With this software, you will be able to connect to a remote workstation and play games or run 3D applications at 60 fps, for free. I'm also working on a bunch of other software, including a GUI editor, a 2D scene editor, a 3D scene editor, a full development environment, a paint application for creating textures and images with AI, a video editing application and more. So what are you waiting for? Help make this dream a reality. Go to the TextureMind YouTube channel and subscribe. Go to this page and donate, if you want. Help make a difference. Thanks.
TextureMind Framework – Progress #26 – Remote display protocol
Ladies and gentlemen, I'm glad to announce that TextureMind Framework finally has its own remote display protocol. The protocol has been used to create another piece of software called TMD (TextureMind Desktop). The software is composed of two applications: a server running on the host machine and a client application which connects to the server. The client application exposes a simple graphical interface that lets the user access and interact with the desktop of the host machine.
In the first version, you can already view the desktop screen and interact with the mouse and keyboard, with full support for Windows and Linux. Among its main features: old-school programming with the fewest possible external dependencies, optimized for high performance with minimal use of hardware resources. Furthermore, its extreme portability makes it easy to use as a client/server ping-pong tool, like iperf. Another strong point: it never crashes. Robust programming, together with debugging based on static and dynamic code analysis, has made it possible to stream video continuously for days in a row without ever crashing, with only a negligible amount of memory leaks, mostly due to the way memory management works. Not bad for something developed entirely by one person in a month.
Currently I'm working on improving the message transport protocol. I'm building reliable transport on top of UDP, so it will be possible to stream over both TCP and UDP. After that, I will improve the video codecs by adding multithreaded JPEG compression and an AV1 encoder/decoder with ffmpeg. I will also try to support hardware encoding and decoding, at least on Nvidia and AMD GPUs. It will be possible to play games at 4K resolution at 60 fps on your LAN. For free. TMD will have its own website (ASAP) where it will be possible to find news, documentation, screenshots and downloads.
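Just to illustrate what reliable delivery over UDP typically involves (a minimal sketch with hypothetical names, not the actual TMD wire format), each datagram can carry a small header with a sequence number so the receiver can detect loss and reordering and acknowledge what it has seen:

#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical header prepended to every UDP datagram (not the real TMD format).
// Fields are assumed to be converted to network byte order before sending.
struct PacketHeader {
    uint32_t sequence;   // monotonically increasing per-channel sequence number
    uint32_t ackBase;    // highest contiguous sequence received from the peer
    uint32_t ackBits;    // bitfield acknowledging the 32 packets before ackBase
    uint16_t channelId;  // e.g. display, input, audio
    uint16_t payloadSize;
};

// Serialize header + payload into a datagram buffer ready for sendto().
std::vector<uint8_t> buildDatagram(const PacketHeader& h, const uint8_t* payload) {
    std::vector<uint8_t> out(sizeof(PacketHeader) + h.payloadSize);
    std::memcpy(out.data(), &h, sizeof(PacketHeader));
    std::memcpy(out.data() + sizeof(PacketHeader), payload, h.payloadSize);
    return out;
}

A sender would keep unacknowledged packets in a retransmission queue and resend them when the ack bitfield shows a gap for longer than a retransmission timeout.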
TextureMind Desktop – Release date 6 January 2025
The planned release date for TMD (TextureMind Desktop) is January 6, 2025. The software will be similar to VNC but designed to deliver performance: high resolution at a high framerate. It will be composed of two applications: the server and the viewer. Usage will be simple: launch the server on the host machine and use the viewer to connect to it. There will be support for display, input and audio on Windows and Linux, and hopefully hardware encoding and SSL. It will be free, so you can test it as much as you want.
TMD will be mostly oriented toward high-responsiveness scenarios, like video games, but also professional applications such as CAD tools or text editors. There will be two streaming modes: best quality and best framerate. Best quality will try to maximize image quality, reducing the framerate in case of network congestion. Best framerate, on the contrary, will try to keep the framerate constantly high by reducing quality, again in case of network congestion. Moreover, you will be able to tweak other client-side settings to maximize your experience: change the codec used for video streaming in real time, enable/disable quality updates, select the codec for quality updates, enable/disable hardware decoding, limit network bandwidth and so on.
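As a rough sketch of how the two modes could react to congestion (hypothetical logic written for illustration, not the actual TMD implementation), best quality keeps the encoding quality and drops frames, while best framerate keeps the frame pace and lowers quality:

enum class StreamingMode { BestQuality, BestFramerate };

// Hypothetical per-frame decision based on how full the send queue is (0.0 - 1.0).
struct EncoderSettings { int jpegQuality; bool skipFrame; };

EncoderSettings adaptToCongestion(StreamingMode mode, double queueFullness) {
    EncoderSettings s{80, false};
    if (queueFullness < 0.5)
        return s;                       // no congestion: defaults are fine
    if (mode == StreamingMode::BestQuality)
        s.skipFrame = true;             // preserve quality, sacrifice framerate
    else
        s.jpegQuality = queueFullness > 0.8 ? 40 : 60;  // preserve pace, lower quality
    return s;
}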
TMD will have its own website, with pages for documentation, screenshots, downloads and support. I will open a Freshdesk site, so you can open a ticket for the issues you find during your tests.
TextureMind Framework – Insights #3 – Basic principles
- Few dependencies. Most modern applications are gigantic bundles of third-party dependencies. While the adage “avoid reinventing the wheel” may be true, it is also true that including myriads of external projects in a single one only complicates build maintenance, increasing build times and hurting portability to other platforms. To avoid this, I set myself the goal of using as few external dependencies as possible, even at the cost of reinventing many little wheels within a single, huge, consistent project. As a reward, the entire framework takes only three and a half minutes to build in release mode, while other similar projects may take hours.
- Robustness. Everything in the framework has been programmed from scratch to guarantee robustness in the basic operations, like memory management, containers, data structures, parsing and serialization. The framework has been used to create other applications for years, and the basic functionality has been stress-tested with billions of iterations. The serialization format used to save data never crashes, even if you corrupt the bytes with salt-and-pepper noise.
- Consistency. The framework has been entirely coded in C++ with the fewest external libraries possible. Not even the Standard Template Library has been used, in favor of a custom implementation with a higher degree of control. A modular architecture avoids monoliths and enables plugins: multiple executables in the same project can share code through dynamic libraries. On Windows, the CRT has been included in the build because the software architecture allows it, so you don't have to install Microsoft redistributables to make it work: it runs even on a clean Windows installation without having to install anything else.
- Simplicity. Complexity exists only where something has been designed or done badly. The entire framework was created with the goal of being simple and making life easier for those who use it. Everything must revolve around simplicity, from programming to project building, installation, application usage and configuration.
- High performance. All the algorithms and solutions have been designed to deliver the highest performance without relying on expensive hardware.
- Low resources. The applications created with the framework must work at their best with the minimum possible requirements in terms of hardware, memory and CPU usage. Anything that can be executed on the GPU must always provide a software alternative to run on GPU-less machines: this is important to reduce costs when you are dealing with paid virtual machines in a cloud service.
- Scalability. The framework can be used to create a wide range of software, from a Pac-Man clone to client/server applications, interposition libraries or games with Unreal Engine 5-class graphics. It can be built on a wide range of platforms with different requirements, from the oldest to the latest operating systems. For example, an application could run on Windows 7 as well as Windows 11, or have a build for the Raspberry Pi or, even more extreme, for AmigaOS 4.1.
- Hard work. The entire framework has been developed over the years by one person (me), making difficult choices that would only pay off after a huge amount of effort, without ever compromising or taking shortcuts. This has paid off a lot over time, and will probably be the key to success for the entire project in the future, given that the modern trend goes in the opposite direction.
- Passion. The framework was not born with the purpose of making money or building fame and popularity, but with the purpose of creating something beautiful for the pure pleasure of doing it. Programming is not seen as a mere tool to achieve a goal but as a pure form of art, without giving up the highest standards of modern computer science.
- No greed for money. The framework will also be used for commercial projects, but not out of greed. If it makes money, good; otherwise, so be it. No big company will ever be able to get its hands on this project to make it its playground, so it will always stay true to its basic principles. This project will never use clickbait, advertising or clever gimmicks to attract users' attention: if it gets noticed, it will only be thanks to the quality of the results obtained.
TextureMind Framework – Insights #2 – Still in a floppy disk!
In the past, the framework was so small that it could fit on a floppy disk, with an average application size of 250 KB. After 13 years, the framework is much larger and has a few external dependencies that make things worse, but it's nothing compared to modern applications. The average application size is 8 MB, including the 2D/3D engines, the Vulkan/Cairo wrappers and the entire GUI system. However, during my latest analysis I noticed that an application using the common library alone is only 450 KB, which can still fit on a floppy disk. Not bad, considering that today the total disk space required by an application can exceed 100 MB or even 1 GB. With the common library, you still have the full set of containers, networking, multi-threading, IPC, log files, compression and file management.
The core module contains all the code required to handle objects, containers of objects, plugins and serialization, and it's required by the other modules. An application with the core module has an average size of 3.5 MB. I think it can be improved with some optimizations; with some effort, it could be reduced to 2 MB. From my experiments, I can easily produce a reduced version of the core library which still makes the other modules work, with an average application size of 600 KB. It would be beautiful for the framework to produce a self-contained application with modern graphics and audio that still fits under 1.44 MB (a high-density floppy disk). I really like the idea. I think I will put some work in this direction, in particular for making demos and games in the future.
TextureMind Framework – Insights #1 – Dropping DevIL
I have a huge external dependency to support a lot of image formats, like JPG, PNG and BMP, but also other weird stuff, like Paint Shop Pro and Doom textures. This dependency has been discontinued since 2018, it has a lot of other external dependencies (most of them obsolete as well), the API is old and the build system is tricky to maintain. The library is LGPL-licensed, so I can only link the DLL dynamically. Recently, I tried DWorkSim on a fresh installation of Windows 10 Pro and it didn't work because DevIL had issues with missing CRT files. To fix the issue, I would have to install the old Visual Studio redistributables or rebuild the dependency. I don't like that.
To simplify things, I decided to drop the DevIL libraries in favor of integrating built-in, dependency-free image libraries into the framework, like stb from nothings.org, which is compact, functional, well proven and public domain. Of course I will no longer support a lot of weird (and abandoned) image formats, but who cares. I will keep basic support for loading (and writing) BMP, PNG, TGA, JPG, GIF, PIC, PGM and HDR in my picture module. Additionally, more image formats will be supported through external plugin modules with smaller and more sustainable dependencies, like turbo-jpeg, open-jpeg and libavif. This will greatly simplify the build process and the porting to other operating systems. I have already implemented the image import/export part with stb and it works like a charm. Soon I will drop DevIL for good, so the build will be lighter and will cover more systems without drawbacks or additional things to install.
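For reference, this is roughly what loading and saving an image with the stb single-header libraries looks like (a minimal sketch; the framework's actual picture module wraps this differently):

// stb_image / stb_image_write are single-header, public-domain libraries from nothings.org.
#define STB_IMAGE_IMPLEMENTATION
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image.h"
#include "stb_image_write.h"

bool convertToPng(const char* inPath, const char* outPath) {
    int w = 0, h = 0, channels = 0;
    // Force 4 channels (RGBA) so the output format is predictable.
    unsigned char* pixels = stbi_load(inPath, &w, &h, &channels, 4);
    if (!pixels)
        return false;
    int ok = stbi_write_png(outPath, w, h, 4, pixels, w * 4);
    stbi_image_free(pixels);
    return ok != 0;
}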
Lighter is better!
TextureMind Framework – Progress #25 – Porting on Linux
Finally, I ported my entire framework to Linux. From now on, all the applications that I create for Windows will be supported on Linux too. It took a really great deal of work to achieve this. It was necessary to improve the management of external dependencies, do some builds, implement a Meson-based build system for the framework and a build deployment system based on Python scripts. I had to fix literally hundreds of thousands of errors and resolve several technical issues related to the differences between the two operating systems. The first build of just the common library filled a log with 3.5 million lines of errors.
I had to analyze the errors and fix them one by one. After the common library, I had to repeat the same work for all the other components, but fortunately the errors were already much fewer. I had to implement the missing parts for Linux, including all the window and event handling. I also had to refactor the whole font-management part and the way screens are presented in the window, especially the Vulkan part. It was a lot of work, even if it took little more than two weeks; I wrote more than 170 commits. Looking at the results on screen, I'd say it was definitely worth it. In the future, I will port my framework and all my works to other operating systems. The next one I want to support is macOS. Also on my list: Raspberry Pi 5, Windows on Arm, Android, iOS and AmigaOS 4.1.
TextureMind Framework – Progress #24 – Preliminary work for Remoting Protocol
Currently I'm working on the development of the TMD (TextureMind Desktop) remoting protocol. I want to reach the point where it's possible to connect to a host machine as soon as possible. I took the chance to improve the framework architecture by implementing an entire plugin system, so now it's possible to add functionality through external modules. I added YUV conversion with the I420, YV12, NV12 and NV21 pixel formats, and I improved the image format so that it's now possible to serialize multi-plane YUV images. I made all the RGBA pixel formats endianness-agnostic. I wrote an abstraction layer for video compression and I'm writing a plugin module based on the ffmpeg libraries to support H.264 and AV1 compression. I have already implemented the transport layer with TCP support, so it's possible to send messages between processes over the network using my serialization system. Now I have to implement the communication channels for the remote session, in particular the display and input channels. I designed the entire architecture, and I think I made a lot of improvements compared to classic remote protocols. There will be an abstraction layer to capture or control not only the desktop but also individual applications. Imagine writing an application for a virtual museum and wanting multiple users connected to it: you can deliver the application in the form of SaaS. A channel can then have two modes of communication: per client or broadcast. To give a concrete example, the display channel is typically per client and the input channel is broadcast, but you can also extend the display channel to stream in broadcast. Finally, I designed a smart adaptive frame rate control to avoid jittering. The first version of TMD will only support software video encoding/decoding, but future versions will be optimized to use the GPU, in particular NVIDIA with NVENC and AMD with AMF.
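To give an idea of what the YUV conversion involves, here is a minimal RGBA-to-I420 sketch using the common BT.601 integer coefficients (an illustration only, not the framework's optimized code): I420 stores a full-resolution Y plane followed by U and V planes subsampled 2x2.

#include <cstdint>

// Convert an RGBA image to I420 planes (BT.601, video range).
// Width and height are assumed to be even; dstY/dstU/dstV must be preallocated.
void rgbaToI420(const uint8_t* rgba, int width, int height,
                uint8_t* dstY, uint8_t* dstU, uint8_t* dstV) {
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const uint8_t* p = rgba + (y * width + x) * 4;
            int r = p[0], g = p[1], b = p[2];
            dstY[y * width + x] = (uint8_t)(( 66 * r + 129 * g +  25 * b + 128) / 256 + 16);
            if ((x % 2) == 0 && (y % 2) == 0) {  // one chroma sample per 2x2 block
                int idx = (y / 2) * (width / 2) + (x / 2);
                dstU[idx] = (uint8_t)((-38 * r -  74 * g + 112 * b + 128) / 256 + 128);
                dstV[idx] = (uint8_t)((112 * r -  94 * g -  18 * b + 128) / 256 + 128);
            }
        }
    }
}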
DWorkSim v0.2
DWorkSim stands for Deterministic Workflow Simulator. It's software I created with my TextureMind Framework for testing image transmission in remoting protocols, like Microsoft Remote Desktop or AWS DCV. Frames are identified by the number in the top-left corner. Frames with the same number will always have the same pixels, even if they are generated at different times or on different machines.
DWorkSim is a good alternative to raw images for instantaneous PSNR estimation during the transmission of the animation sequence. It's probably the only software in the world that allows you to do that, because other tools don't support both frame generation and PSNR estimation. You can also perform any other estimation that compares two images, like image coherence, blurriness, pixel accuracy, color distortion and text readability. DWorkSim is now at version 0.2. It can generate images to test both the GPU and the CPU. The GPU test is different from the CPU test, so that each is appropriate for its target. The GPU test is performed with the Vulkan libraries, while the CPU test uses the Cairo libraries. The GPU test contains two shaders taken from Shadertoy:
- Full Spectrum Cyber: https://www.shadertoy.com/view/XcXXzS
- Where the River Goes: https://www.shadertoy.com/view/Xl2XRW
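As a reference for the PSNR estimation mentioned above, this is the standard formula applied to two frames with the same dimensions (a minimal sketch, not DWorkSim's internal code):

#include <cmath>
#include <cstdint>
#include <limits>

// PSNR between two 8-bit RGBA frames of identical size: 10 * log10(255^2 / MSE).
double computePsnr(const uint8_t* a, const uint8_t* b, int width, int height) {
    double sumSq = 0.0;
    const int count = width * height * 4;
    for (int i = 0; i < count; ++i) {
        double d = double(a[i]) - double(b[i]);
        sumSq += d * d;
    }
    double mse = sumSq / count;
    if (mse == 0.0)
        return std::numeric_limits<double>::infinity();  // identical frames
    return 10.0 * std::log10((255.0 * 255.0) / mse);
}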
DWorkSim is Freeware for now, so you can use it for testing your software. For the download, visit:
DWorkSim – Deterministic Workflow Simulator
TextureMind Framework – Progress #23 – 2D geometries
I refactored the entire geometry module to eliminate redundancy and to introduce new classes for advanced functionality. I redesigned my 2D shape model to be lighter and more compatible with current standards. For example, it's now possible to create a 2D path that is compatible with the SVG format. I implemented a simple SVG path parser, so you can pass the path string directly to the 2D path object, which then converts it to the custom format.
In the picture above you can see an SVG path rendered with my 2D engine and the Cairo libraries. I also added 2D geometric primitives, such as circles, ellipses, arcs, rounded rectangles and splines. These elements are now also part of the 2D engine, so they are automatically converted and drawn by the graphics device. I also added two components to the geometry module: ShapeGenerator and CollisionDetector. ShapeGenerator generates 2D shapes from a few arguments or from operations between other shapes. It will be used to convert 2D paths into Bezier curves, lines or polygons, or to generate complex shapes from a mathematical model. The CollisionDetector component will be used for collision detection between simple shapes, like points, lines, rectangles, circles and polygons. The next step is to extend these modules to the 3D world, implementing 3D geometries, collision detection and generation of lathe and extrusion objects.
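As a small illustration of the kind of tests a CollisionDetector component performs (a sketch with hypothetical types, not the framework's actual API), circle-circle and point-rectangle checks reduce to a few comparisons:

struct Point  { double x, y; };
struct Circle { Point center; double radius; };
struct Rect   { double x, y, width, height; };

// Two circles collide when the distance between centers is at most the sum of radii.
bool circleVsCircle(const Circle& a, const Circle& b) {
    double dx = a.center.x - b.center.x;
    double dy = a.center.y - b.center.y;
    double r  = a.radius + b.radius;
    return dx * dx + dy * dy <= r * r;   // compare squared distances, no sqrt needed
}

// A point is inside an axis-aligned rectangle when it lies within both extents.
bool pointVsRect(const Point& p, const Rect& r) {
    return p.x >= r.x && p.x <= r.x + r.width &&
           p.y >= r.y && p.y <= r.y + r.height;
}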
TextureMind Desktop (future project)
TextureMind Desktop (TMD) will be a software product that allows remote access to a personal computer's desktop (the host machine) from a client device. It will be composed of a server running on the host machine and a client application which connects to the server. The application will expose a simple UI that lets the user access and interact with the desktop of the host machine.
Why TextureMind Desktop?
There are already several programs of this type, like VNC, RDP, TGX, PCoIP, Citrix, RGS and DCV. So what's the purpose of TMD? The TMD project was born from TextureMind Framework, which is an entire C++ development environment for two-dimensional and three-dimensional applications, complete not only with the basic features for managing multithreading, inter-process communication, networking, dynamic modules, plugins, compression and serialization, but also with 2D and 3D graphics, a proprietary GUI system, an entire 3D engine, a material system comparable to that of Unreal Engine 5, an internal computer vision architecture and a 2D/3D audio system compatible with Dolby Surround 7.1 (and potentially Atmos). TextureMind Framework uses almost no external dependencies, only graphics/audio libraries and a few import/export libraries, like DevIL, ffmpeg, turbo-jpeg and AssImp. TextureMind Framework is highly consistent, self-contained and easy to port to any operating system, and the same goes for any product derived from it. TMD will benefit from the framework's consistency and from all the additional features not implemented in other remoting protocols.
Best features:
- Optimized for high framerate and low CPU usage
- GPU encoders (i.e. NvENC and AMF)
- Multiple monitors with 4K resolutions
- Smart client / server frame rate autotuning
- A large set of image (jpg, bmp, png...) and compression formats (zip, lz4, lzo...)
TextureMind Framework – Progress #22 – OpenAL Soft
In parallel with the GUI Editor, I developed the audio part of my framework. I wrote an abstraction layer to handle the AudioContext and I created an implementation of it with the OpenAL Soft libraries (but it can be implemented with any other audio library). I also created an abstraction layer to handle audio tracks and sounds. It's now possible to load a sample from the Ogg Vorbis format and play it with the audio context. The context can play many sounds at the same time, handling the queue of sounds in flight. A sound can be played once until it terminates, or played in a loop and stopped later. The audio context automatically manages the sounds playing in the background and the ones about to terminate. It's possible to group several sounds under the same slot to manage master volume, frequency and other parameters. For example, in a video game it would be useful to play sound effects in one slot and the music track in another slot, with a separate volume. The conversion between audio models is handled automatically: you can load a Dolby Surround 7.1 audio track and play it on a stereo system, and vice versa. Finally, you can also play sounds in a virtual 3D space, where the listener is positioned within the scene. The next step will be to add real-time DSP effects and to load and play music from the MOD file format.
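For readers unfamiliar with OpenAL Soft, the underlying calls look roughly like this (a bare-bones sketch; the framework wraps all of this behind its AudioContext abstraction, and the decoding from Ogg Vorbis to PCM is omitted here):

#include <AL/al.h>
#include <AL/alc.h>
#include <vector>

// Play a block of 16-bit stereo PCM samples through the default output device.
void playPcm(const std::vector<short>& samples, int sampleRate) {
    ALCdevice*  device  = alcOpenDevice(nullptr);          // default output device
    ALCcontext* context = alcCreateContext(device, nullptr);
    alcMakeContextCurrent(context);

    ALuint buffer = 0, source = 0;
    alGenBuffers(1, &buffer);
    alBufferData(buffer, AL_FORMAT_STEREO16, samples.data(),
                 ALsizei(samples.size() * sizeof(short)), sampleRate);
    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, ALint(buffer));
    alSourcePlay(source);

    ALint state = AL_PLAYING;
    while (state == AL_PLAYING)                             // busy-wait until playback ends
        alGetSourcei(source, AL_SOURCE_STATE, &state);

    alDeleteSources(1, &source);
    alDeleteBuffers(1, &buffer);
    alcMakeContextCurrent(nullptr);
    alcDestroyContext(context);
    alcCloseDevice(device);
}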
Two big announcements
My paternity leave is about to end. I took advantage of this period of pause to continue working on TextureMind Framework in my spare time (programming is my lifelong passion). I wrote about 800 commits and finished most of the parts that were incomplete. I refined the resource system, supporting dynamic resources and content-based optimizations. I finalized binary serialization and file formats. I made them 100% resilient to errors and corruption, so they can pass any fuzz test or existing application-security criteria. I added system clipboard support, so it's now possible to copy & paste anything between processes: text, images, audio tracks, 3D models, anything. I implemented a complete audio engine with OpenAL Soft. I completed the GUI system and created a professional GUI Editor that is almost finished. I started to support networking, so it's already possible to implement client/server applications, exchanging messages between processes with all the capabilities of my serialization system and my cross-platform, backward/forward-compatible binary formats. Now it's getting serious.
I have two big announcements. The first is that 2024 will be my last year of work at NICE / AWS as a developer of the NICE DCV software. I will no longer be able to work for Amazon because, according to company policy (Return To Office), it's not possible to develop remotely and I cannot move from my city. For now, I'm in a sort of "grace period", but soon it won't be possible to work for the company anymore and I will leave, probably by the end of the year. The second announcement is that when I leave my job, I will dedicate myself to the TextureMind activity and make it my real job. Many applications and projects will be created from this point onward; it will be the beginning of a new era. Among the many things I intend to do, I am working on my own remote desktop protocol, obviously totally original and written from scratch with my TextureMind Framework. It will be called TMD (TextureMind Desktop). I have many years of experience as an employee developing this type of software and I am perfectly capable of creating my own software starting from scratch. The software will not only be a remote desktop protocol, but also the virtualization protocol for my applications, because I want to release some of them in the form of SaaS (software as a service). Everything will be developed in C++ with optimizations, high performance and the minimum amount of external dependencies. It will be great. Soon I will publish some videos to show my latest progress.
If you are interested in my projects and want to start supporting my future business, you can make a donation to:
https://www.texturemind.com/donate/
Thanks.
300 commits in 1 month!
I'm closing the month of April with 300 commits, all in just one month. I wrote most of the code for the GUI Editor application and all the missing audio parts, with support for Dolby Surround and 3D environments.
TextureMind Framework – Progress #21 – Gui Editor
When the application is complete, it will be forked internally to create another application called "IdealMind". With IdealMind, it will be possible to manage resources, create 2D and 3D scenes and design new applications with a programming language. The main goal is to create a development environment for quick and easy prototyping of new applications, without sacrificing power and stability.
OutVideoWork – Work in progress
This application is designed for creating professional visual effects and for non-linear video editing. It will make use of the ffmpeg library for the video part and my framework for the graphics part, in particular the ability to program effects with GPU shaders, create animations with the advanced animation system, and use ray-traced rendering.
ShaderMind – Work in progress
ShaderMind will be designed for the automatic creation of shaders compatible with Shadertoy and other environments. It will be possible to copy & paste shaders from Shadertoy (or elsewhere) and see them work in the ShaderMind application in no time, so new shaders can be saved for the future and reused in other projects. In the same way, you can design new shaders and export them to Shadertoy without additional effort. Shaders will be designed with a 3D modelling editor, where the scene is previewed with my engine and my material system: when finished, the entire scene can be converted into one or more shader programs compatible with Shadertoy. Resources can be converted into GLSL source code to include in the shader program, with some limitations. For example, a small texture or a small 3D object can be converted into source code, but not large textures or huge 3D models. The scene will basically be composed of signed distance functions rendered with a simplified material system. Complex resources like Bezier outlines for 2D or 3D text rendering will be converted into source code as well.
MultiEdge Paint – Work in progress
MultiEdge Paint will be a professional application for painting artistic images and generating seamless textures. It will make use of generative AI to create new images or modify existing ones, with the latest features like txt-to-img, inpainting and outpainting, with the option to set up the environment locally (not as a remote service; it will require a powerful GPU) and download checkpoints directly from civitai.com. It is called "multi-edge" in reference to the horizontal, vertical and temporal loops: it can loop across multiple "edges" of the images, handling multiple layers and maps at the same time.
IdealMind – Work in progress
Originally TextureMind Framework IDE, IdealMind is designed to manage resources (like textures, materials, 3D models) and to create applications with TextureMind Framework's functionalities. The term "Ideal" stands for: Integrated Development Environment Application Level, but it's also nice to associate it with the word "Mind", which is part of the TextureMind logo.
With IdealMind, it will be possible to create any kind of application, from a simple window with a rectangle inside of it to an entire 3D game. You can create new projects, add resources like textures, materials, audio, design new graphics interfaces, handle input events, render 3D models, execute scripts and so on. It can be used also to create tests and to handle graphics workflows.
TextureMind Framework – Progress #20 – GUI is complete!
It's done! Taking advantage of the extra time I have now, I finished the GUI associated with my TextureMind Framework. Compared to the previous update, I added frame windows, tree views, tab strips, tool bars and docking layouts. I fixed a lot of bugs, improved the management of resources and events, optimized the algorithms that draw to the screen, and so on. One of the big improvements is in the management of resources: I introduced the concept of a resources package. A project and its internal components may have a resources package associated with them which contains materials, textures and templates. The resources in the project are considered the main package, but a single component may store an independent package of resources used to render itself. When the component is exported, it may internally save only the resources used for its rendering. When the component is imported, its internal resources are merged into the resources of the main project: if the GUIDs are the same, the component's resources are reused, otherwise they are added to the main project.
I also implemented a totally dynamic model for loading and initializing resources. Previously, all the resources were loaded, stored in memory and uploaded to GPU memory at once. Now, only the resources used for rendering within a number of consecutive frames are loaded, stored and uploaded. Additionally, you can specify a number of preloaded resources associated with a component: those resources will be kept alive only for the lifespan of the component. In this way, it's virtually possible to explore terabytes of data stored somewhere, as in modern engines like Unreal Engine, or to optimize for hardware where RAM is limited. I have now started to work on the GUI editor, which is at a good point of development as well. The GUI editor will be used to design new interfaces and also to create new skins, with any kind of appearance. The entire GUI system is designed for applications and video games, so it must be robust across a wide range of use cases.
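A minimal sketch of the idea behind the dynamic model (hypothetical names, not the framework's real classes): each resource remembers the last frame it was used in, and anything untouched for a number of frames is released.

#include <cstdint>
#include <string>
#include <unordered_map>

// Hypothetical cache that keeps a resource alive only for a window of consecutive frames.
class ResourceCache {
public:
    explicit ResourceCache(uint64_t keepAliveFrames) : keepAlive(keepAliveFrames) {}

    // Called whenever a resource is needed to render the current frame.
    void touch(const std::string& guid, uint64_t currentFrame) {
        lastUsed[guid] = currentFrame;   // load / upload to GPU here if not resident yet
    }

    // Called once per frame: release everything not used recently.
    void evictStale(uint64_t currentFrame) {
        for (auto it = lastUsed.begin(); it != lastUsed.end(); ) {
            if (currentFrame - it->second > keepAlive)
                it = lastUsed.erase(it); // free CPU/GPU memory for this resource
            else
                ++it;
        }
    }

private:
    uint64_t keepAlive;
    std::unordered_map<std::string, uint64_t> lastUsed;
};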
DWorkSim – Deterministic Workflow Simulator
This application was created with the aim of generating an animated scene where the frames are always the same. Each frame is associated with a number, so it is possible to capture individual frames and compare them, to see if there are any differences after the frames have been manipulated by a second application, such as a remote display protocol. Furthermore, the workflow is generated without excessive computational cost, so it can be used to measure the performance of the remote display protocol without excessive impact on the system. This software is freeware, so you can use it for free, even for commercial products (note that you cannot distribute this software, sell it, change the author's name or modify the content: read the EULA document in the zip file for more information).
How to use it
Download the right version for your operating system, unzip and use it. You can launch DWorkSim from command line.
On Windows:
> DWorkSim.exe
> DWorkSim.exe --software-rendering
> DWorkSim.exe --software-rendering --force-60-fps
> DWorkSim.exe --frame-target 357 --output test357.jpg
> DWorkSim.exe --software-rendering --frame-target 357 --output test357.jpg
> DWorkSim.exe -h
> DWorkSim.exe --about
> DWorkSim.exe --version
On Linux:
$ ./DWorkSim
$ ./DWorkSim --software-rendering
$ ./DWorkSim --software-rendering --force-60-fps
$ ./DWorkSim --frame-target 357 --output test357.jpg
$ ./DWorkSim --software-rendering --frame-target 357 --output test357.jpg
$ ./DWorkSim -h
$ ./DWorkSim --about
$ ./DWorkSim --version
You may need to install Cairo libraries for software rendering:
$ sudo apt install libcairo2-dev
For testing on machines without a GPU, you need to add the --software-rendering argument. It will run a completely different animation, more suitable for software rendering, without dropping performance. The framerate for software rendering is 30 fps by default, but you can increase it to 60 fps with the --force-60-fps argument. You can generate a single frame with the --frame-target argument followed by the number of the frame to render, and save the result to an image file with the --output argument followed by the name of the file. The file format is decided by the file extension; for example, image.jpg will save a JPEG image. You can save images in BMP, TGA, JPG and PNG formats. Images in JPEG format are compressed with a quality of 80, while the other formats are lossless.
TextureMind Framework – Progress #19 – ListViews, Menus and ComboBoxes
I recently focused my efforts on the development of some widgets for displaying lists of items, in particular: listviews, menus and combo boxes. Listviews are optimized to handle hundreds of thousands of items without causing the application to slow down too much. Listviews can be configured to draw colors corresponding to different events, such as when the mouse passes over an item, when it's clicked, selected or out of focus. It is possible to select a single item or a range of items by holding down the SHIFT key, or add or subtract a single item by holding down the CTRL key.
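The selection rules can be summarized by a small sketch (hypothetical code, not the framework's widget API): a plain click replaces the selection, SHIFT extends a range from the last anchor, CTRL toggles a single item.

#include <algorithm>
#include <set>

// Hypothetical selection state for a listview with items indexed 0..count-1.
struct Selection {
    std::set<int> items;
    int anchor = -1;   // index of the last plainly clicked item

    void onClick(int index, bool shift, bool ctrl) {
        if (shift && anchor >= 0) {
            items.clear();                                   // select the whole range
            for (int i = std::min(anchor, index); i <= std::max(anchor, index); ++i)
                items.insert(i);
        } else if (ctrl) {
            if (!items.erase(index)) items.insert(index);    // toggle a single item
        } else {
            items.clear();
            items.insert(index);                             // plain click: single selection
            anchor = index;
        }
    }
};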
When one or more items are selected, an event is generated through a callback system. Another interesting addition concerns the menus. I used listviews to implement an entire drop-down menu system with a rather classic setup. Menu boxes can be opened and closed in real time. When an item is clicked, it is reported to the application via the same callback system used for the listviews. It should be remembered that this GUI is drawn entirely inside the application window, so these widgets can never extend outside it and have to adapt dynamically to best fit the available space. After implementing the menus it was possible to implement combobox widgets as well. Once you click on the combobox, a drop-down menu opens and the click changes the selected item within the box, generating an event. As with the menus, the drop-down lists make the best use of the space available on the screen during window resizing. The positioning of the drop-down menu also takes into account any geometric transformation applied to the widget that is opening it, repositioning it in the best way.
So I can say that the GUI is almost finished. There are a couple of missing widgets, but they are not complicated to implement. I don't think I will create other videos about GUI updates because I will focus on other features. When completed, this GUI will be used to create applications and video games. The next step is to implement an editor to create GUI pages that can be used inside applications.
PC Breathless Demo #1
I'm glad to release the first demo of PC Breathless that you can try on your own desktop PC or laptop. The minimum requirements are Windows, a GPU and the Vulkan libraries installed.
You can look around with the mouse, move forward and backward with the up/down cursor keys, and go up and down with the left/right mouse buttons.
This demo is not a great evolution of the demo already published on YouTube. The big difference is in the clean-up and the fact that you can safely run it on your PC without issues (be sure to have the Vulkan libraries installed). If you like this project and want it to be continued in the future, please make a donation to support my work. It will be appreciated.
For now, you can see the third stage with no collision detection applied. In future demos, I will implement actual walking with collision detection, and the missing code to open the doors. As you can see, the graphics are very simple, without any lighting, sky, characters or animated textures. That's because the 3D engine is still under development. As a next step, I will introduce a new lighting system with PBR. I don't exclude implementing a ray-tracing renderer in the future as well.
Old framework’s GUI
This demo shows the GUI system of my old project "Unrelated Framework". This GUI was programmed from scratch and supports both software and GPU rendering; in this case, the demo uses OpenGL for rendering. The GUI implements the most important widgets, like buttons, text boxes, form windows, tab strips, list views and menus. In this demo, you can find some windows on a desktop background, with tabs and text boxes. You can minimize and maximize the windows, click on the option buttons, write some text and test the popup menu. The rotating images belong to deferred contexts which are drawn into drawables that are presented inside the GUI's windows.
Currently, I'm re-implementing the whole GUI in my new project "TextureMind Framework", which is more advanced and accurate than in the past. For instance, in this demo the text is drawn with bitmap fonts precalculated into one or more textures, while in the modern TextureMind Framework all the fonts are automatically generated by the engine. In particular, the new framework can use both bitmap and outline fonts, if supported by the graphics device (currently, I'm working to implement path rendering and outline fonts in Vulkan). Another difference from the past is the material system: back then, everything was rendered with simple image patterns, while in the modern engine I have implemented a full and complex material system made of visual expression nodes and shader programs (where available). Still, the old demo was programmed 15 years ago, which is amazing considering the complexity of the features that I'm still trying to port to the modern framework.
Frame rate adaptation with linear interpolation
When you create a game, you need to consider the target framerate of your application: it will affect the entire dynamics of the physics engine and all the choices that make the game playable. However, given the great heterogeneity of modern graphics cards paired with different types of monitors, the target frame rate may not be available on all configurations. It's therefore necessary to scale the target frame rate adequately, to keep the graphics equally fluid without affecting the gameplay dynamics.
The demo presents one of the existing solutions for this type of problem, based on frame adaptation with linear interpolation for trajectory correction. In practice, the application frame rate is the vsync frame rate, while the target frame rate is used to calculate deltaT. Linear interpolation of the trajectories is used to bridge the gap between the target frame rate and the vsync frame rate, generating frames that do not exist in the simulation in order to reach the vsync frame rate. As a consequence, the F1 option is smoother than the F2 option.
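In rough C++ terms, the idea can be sketched like this (an illustration of the general technique, not the demo's actual code): the simulation always advances with the fixed target deltaT, and the renderer draws an interpolated position between the last two simulated states to fill the gap up to the vsync rate.

struct State { double x, y; };

// Linear interpolation between the two most recent simulation states.
State interpolate(const State& previous, const State& current, double alpha) {
    return { previous.x + (current.x - previous.x) * alpha,
             previous.y + (current.y - previous.y) * alpha };
}

// Called once per vsync frame. 'accumulator' carries the real elapsed time,
// 'targetDelta' is the fixed step of the game logic (e.g. 1.0 / 50.0 seconds).
void frameTick(double elapsed, double targetDelta,
               double& accumulator, State& previous, State& current) {
    accumulator += elapsed;
    while (accumulator >= targetDelta) {   // run physics at the target frame rate
        previous = current;
        // current = simulateStep(current, targetDelta);  // hypothetical game logic
        accumulator -= targetDelta;
    }
    double alpha = accumulator / targetDelta;              // 0..1 between two steps
    State drawn = interpolate(previous, current, alpha);   // the frame that "does not exist"
    (void)drawn;                                            // renderFrame(drawn);
}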
Tetris solver V1.0
If you like Tetris, you are going to enjoy this demo. Basically, it is a bot that solves Tetris, which I programmed many years ago. In this case, the only technique considered by the AI to place pieces is the "hard drop", so horizontal moves after the drop and T-spins are not supported.
You can interact with the interface and override the bot's decisions. If you keep the space key pressed, the AI will solve the Tetris board in your place, trying not to be defeated by the level complexity. You can also have fun trying to complicate the scenario: the AI will try to solve the level in the best way possible. This demo exists because in the past I wanted to implement a bot (like GemFinder) to automatically solve Tetris games. Then I got busy with my current job and abandoned it.
Mandelbrot navigator
Old isometric engine
This is an old demo that I created in the past during the development of an isometric engine for a PC port of the SNES game Equinox. The project was commissioned by a person who wanted to port this particular game to the PC and later abandoned it for lack of money. The graphics look very similar to the SNES title; I remember that I did a pretty good job with the porting.
This demo was also part of an old framework for creating video games that I developed in the past. The engine doesn't make use of 3D rendering or the GPU; it's entirely software-based. It implemented an original idea to handle isometric occlusion with a simple blit algorithm, without any variants (in fact, the algorithm could work with a simple BitBlt function call). As you can see, you can play only two rooms with collision detection; you can run, jump and hit enemies. Even though there are only two rooms, the engine was capable of handling the entire game. It's a shame that the project was abandoned.
TextureMind Framework – Progress #18 – AnsiC Parser
A new AnsiC parser was born inside TextureMind Framework. I'm programming this parser from scratch, starting with all the standard ISO C89 features. Most importantly, the parser is designed to be extended to any other language, formal or not, which is the main reason why I didn't use another existing and well-proven parser, like Clang. Another reason is that the parser is designed to be lightweight and anchored to the framework's internal architecture.
I implemented all the main features and now the parser is able to parse a huge variety of AnsiC source code. I was also able to parse the donut source code and get the AST from it:
             k;double sin()
         ,cos();main(){float A=
       0,B=0,i,j,z[1760];char b[
     1760];printf("\x1b[2J");for(;;
  ){memset(b,32,1760);memset(z,0,7040)
  ;for(j=0;6.28>j;j+=0.07)for(i=0;6.28
 >i;i+=0.02){float c=sin(i),d=cos(j),e=
 sin(A),f=sin(j),g=cos(A),h=d+2,D=1/(c*
 h*e+f*g+5),l=cos      (i),m=cos(B),n=s\
in(B),t=c*h*g-f*        e;int x=40+30*D*
 (l*h*m-t*n),y=          12+15*D*(l*h*n
 +t*m),o=x+80*y,          N=8*((f*e-c*d*g
 )*m-c*d*e-f*g-l          *d*n);if(22>y&&
 y>0&&x>0&&80>x&&D>z[o]){z[o]=D;;;b[o]=
 ".,-~:;=!*#$@"[N>0?N:0];}}/*#****!!-*/
  printf("\x1b[H");for(k=0;1761>k;k++)
   putchar(k%80?b[k]:10);A+=0.04;B+=
     0.02;}}/*****####*******!!=;:~
         ~::==!!!**********!!!==::-
             .,~~;;;========;;;:~-.
                 ..,--------,*/
Pretty cool, isn't it? The source code has the shape of a donut and it generates a rotating 3D donut in ASCII characters. As you can see, it's not a conventional source file: it's packed and full of tricks to save space, so it was a perfect test for my new parser. I was surprised to see that the parser was actually capable of parsing it, in about 1.37 ms. It would be very cool to execute it now! I plan to write an AST interpreter to manipulate the source code, translate it into another language and execute it. The AST structure can also be used to feed a virtual machine with instructions (like LLVM) or another compiler backend, like TinyC. The main goal is to use this parser to write applications with full access to the framework's functionality.
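To make the AST idea concrete, a tree-walking interpreter for a tiny expression subset might look like this (purely illustrative C++, not the framework's parser classes):

#include <memory>

// Minimal AST for integer expressions: literals and two binary operators.
struct AstNode {
    enum Kind { Literal, Add, Mul } kind;
    long value = 0;                          // used when kind == Literal
    std::unique_ptr<AstNode> left, right;    // used for binary operators
};

// Recursive evaluation of the tree, the simplest possible "executor".
long evaluate(const AstNode& n) {
    switch (n.kind) {
        case AstNode::Literal: return n.value;
        case AstNode::Add:     return evaluate(*n.left) + evaluate(*n.right);
        case AstNode::Mul:     return evaluate(*n.left) * evaluate(*n.right);
    }
    return 0;
}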
Another goal is to implement a GLSL parser and manipulate the source code to integrate shader programs into my material system. It will be possible to copy & paste shaders from Shadertoy into material nodes, to create super complex materials for 3D rendering.
Future plans for TextureMind Framework and satellite projects
I'm very excited to announce all the future plans for TextureMind Framework. First of all, let me tell you that I'm focusing all my energy on the development of the framework. I have been developing it for months now, without stopping, trying to keep it consistent and to reach important targets in no time. In the previous months, I refactored the entire system of class interfaces, serialization, graphics devices and the graphical user interface, implementing new widgets.
New language Parser
Now that everything looks solid, I'm focusing my efforts on another important feature: a brand new parser for programming languages. The parser will be capable of parsing source code according to given rules and producing an abstract syntax tree (AST). The AST will be used for many purposes. For instance, it can be given to a writer to produce the source code again, or to an executor component to run the instructions at runtime. The AST can also be used to feed an executor (like LLVM) and execute the instructions just-in-time.
GPU / CPU shaders
The AST will also be used to cross-compile GLSL shaders from Shadertoy and make them compatible with my internal material system. In this way, you can copy & paste shaders from Shadertoy and make them work directly in my framework without any further adaptation. Multiple GLSL shaders will be merged into a single GLSL / HLSL / SPIR-V shader without the need for ping-pong rendering: buffers will be converted into material nodes and channels will be converted into connections between material nodes. This is one of the most outstanding and absurd features I'm working on. Another feature is the possibility to execute shaders without the GPU, converting GLSL into ANSI C code executed just-in-time by the CPU, with multiple threads running in parallel in the same thread pool.
TextureMind Framework – Progress #17 – GUI – Advanced TextBoxes
I've continued with the implementation of the framework's proprietary GUI. In this case, I implemented a new widget to handle textboxes. As you can see in the video, this is not just a text box widget to put some text into, but a complete text editor that can be used for any purpose, from a simple notepad to editing programming languages.
All the main features are implemented, like Unicode support, text editing, scroll bars, multiline selection and clipboard cut & paste. The appearance of the text box can be customized, with a background texture and different colors. You can also rotate the window and apply complex geometric transformations to it. All the GUI in this video is rendered with the Vulkan libraries, but it can also be rendered with the Cairo libraries.
TextureMind Framework – Progress #16 – Modules and object interfaces
After years without updates, I'm glad to present probably the most important update so far. In the past, the framework was just a monolith of C++ static libraries that could be entirely or partially included within a project to access the various functionalities. On one hand this was good, because it simplified the programming of the framework's parts; on the other, it started to become a problem in terms of modularity and scalability. One of the worst complications was caused by the redundancy of the static library binaries: for example, if I had to create a plugin system, each dynamic library would have to include all the binaries with the functions to manage the various components of the framework.
Future plans for product and projects
I wanted to give you a short update about future plans for the projects related to this website.
TextureMind Framework
Currently I'm working on a very long refactor of the framework to improve the internal architecture and fix a lot of issues. After that, I will finalize the GUI and extend support to other operating systems besides Windows, like Linux and macOS. I will add support for Direct3D and OpenGL, while the software device will be extended with the Skia library for advanced functionality, like 16 bits per channel. I will improve the management of system fonts at the application level, add support for pre-calculated bitmap fonts and path rendering for GPU devices. I will also improve the 3D engine with PBR (Physically Based Rendering) and real-time ray tracing for the GeForce RTX series.
Unlimited Holter ECG
There will be efforts to create a device that can record an ECG with 7 leads for days or weeks and that can be used as an external loop recorder. This is absolutely experimental and it will require months to reach a stable version. For now, I have been able to monitor my heart activity in real time and record an ECG track. I'm working to implement the loop recorder feature, the Holter monitor and all the GUI to handle it from the electronic device. Then I will develop software for mobile devices (Android / iPhone) to download the ECG records, analyze them and produce valid reports. The software will be available from the respective app stores. After that, I will design a PCB to move all the electronic components from the breadboard into a compact space, then I will design the plastic enclosure.
Unlimited Holter
Today it is difficult to find a device on the market that can be used to intercept cardiac arrhythmias, like ventricular tachycardia, that occur infrequently but can pose a serious danger. Usually these products record for 60 seconds or at most 5 minutes, and even the ones that record for 24 hours are very expensive and difficult to use.
There are cases where there is no medical indication for implanting a loop recorder and no products that can replace it. People often purchase devices with metallic electrodes and poor signal quality that cannot record arrhythmias as they occur, are difficult to use, and cannot function while the battery is charging: in practice they are totally useless for this kind of purpose. At present, there is not a single product on the market at an affordable price that can provide this kind of use.
The aim is to create a device that can be used as a Holter monitor and external loop recorder, with the possibility of recording even for months to catch difficult and infrequent arrhythmias like sustained or non-sustained, monomorphic or polymorphic ventricular tachycardia, ventricular fibrillation, supraventricular tachycardia, AV block, atrial fibrillation and so on. In the video, you can see the first working prototype with 7 leads. As soon as possible, it will be integrated onto a PCB to reduce its size. In the future, it will be available in the gift shop so you can have it.
Great news!
Finally I decided to clean up this website and take a clear decision about it. This website won't be used anymore to publish tutorials, reviews, knowledge, thoughts, articles and so on, but only "projects and products" related to the website's activity. I removed all the useless categories and simplified things a lot.
Now you can register an account on this website, log in and leave comments. You can donate to support the projects and products on this website. The software will be really fantastic, so stay tuned because you will see some good things. Bye!
TextureMind Framework – Vulkan renderer showcase
I created a video to show the potential of the Vulkan renderer included in my framework. The engine can now import 3D models from several 3D formats (including Collada) and render them with the Vulkan libraries.
The engine is equipped with a proprietary material system and mesh format. The materials of the imported models are converted to the engine's material format, which is then translated into the shaders that make them work. As you can see from the video, the engine is already capable of rendering millions of triangles at high framerate and resolution (3840x2160).
TextureMind Framework – Progress #15 – Vulkan – Bitmap Text Rendering
Now the Vulkan engine can draw text with bitmap rendering. The engine checks which characters are on the screen and dynamically creates textures only for the glyphs to be drawn, with the correct size and aspect.
The characters are pre-rasterized into bitmaps with the FreeType library and loaded into Vulkan textures only when needed. When the text is no longer rendered, the font is deallocated to reserve space for other resources. In this way, glyphs can be drawn as normal textured polygons, without a significant impact on performance. The text rendering algorithm is capable of drawing text with different alignment formats, including the "justified" one that you see in this video, like in any other word processor. The GUI is drawn with the GPU and can be used for video games or 3D applications which require advanced performance and functionality.
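For reference, rasterizing a single glyph with FreeType boils down to a handful of calls (a minimal sketch; the engine then caches the resulting bitmaps into Vulkan textures):

#include <ft2build.h>
#include FT_FREETYPE_H

// Render one character of a TTF font into an 8-bit grayscale bitmap.
bool renderGlyph(const char* fontPath, unsigned long charCode, int pixelSize) {
    FT_Library library;
    if (FT_Init_FreeType(&library))
        return false;
    FT_Face face;
    if (FT_New_Face(library, fontPath, 0, &face)) {
        FT_Done_FreeType(library);
        return false;
    }
    FT_Set_Pixel_Sizes(face, 0, pixelSize);                 // glyph height in pixels
    if (FT_Load_Char(face, charCode, FT_LOAD_RENDER) == 0) {
        FT_Bitmap& bmp = face->glyph->bitmap;               // bmp.buffer, bmp.width, bmp.rows
        (void)bmp;
        // Upload bmp.buffer as a single-channel texture and record glyph metrics:
        // face->glyph->bitmap_left, face->glyph->bitmap_top, face->glyph->advance.x >> 6.
    }
    FT_Done_Face(face);
    FT_Done_FreeType(library);
    return true;
}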
Amiga Breathless for PC
This project is about porting Amiga Breathless to the PC with modern GPU hardware. If you like this project and want it to be continued in the future, please make a donation to support my work. It will be appreciated.
Downloads
Demo #1
This is the first demo of Breathless for PC. In this demo, you can look around with the mouse, move forward and backward with the up/down cursor keys, and go up and down with the left/right mouse buttons. You can fly around the third stage with no collision detection applied. In future demos, I will implement actual walking with collision detection, and the missing code to open the doors.
As you can see, the graphics are very simple, without any lighting, sky, characters or animated textures. That's because the 3D engine is still under development. As a next step, I will introduce a new lighting system with PBR. I don't exclude implementing a ray-tracing renderer in the future as well.
Background
The remake of Breathless was an old dream from when I was 14, along with other titles like Gloom, Alien Breed 3D, Super Stardust, Mario 64 and so on. This work is part of another, bigger project called TextureMind Framework, developed to facilitate the creation of these kinds of projects: games, demos, presentations and so on. The idea of making a remake of Breathless at this point came when I saw that the 3D engine with Vulkan was stable enough to render any kind of map imported with the AssImp library. So why not import the Breathless maps and use the same 3D engine to render them?
I was amazed by the idea of rendering Amiga Breathless with the Vulkan libraries. The biggest obstacle was importing the maps from the old GLD format, which was more suitable for raycasting than for polygonal rendering. I started downloading all the material from Aminet, using this link:
http://aminet.net/package/game/shoot/Breathless-1996-Source
Then I studied the format from the original C source code, in particular the map editor. The first important obstacle was that the editor was programmed to load maps only in uncompressed format, while all the maps were compressed in an unknown format called SLZ. I got zero results on Google. I didn't know the format and I was about to abandon the project.
TextureMind Framework – Progress #14 – Vulkan – Skinned mesh
Skinned mesh rendering is a fundamental part of every modern 3D engine, so I couldn't avoid implementing it. The skinned mesh, with weights, indices, bones, skeleton and animated nodes, is imported with the AssImp library into my format. I added weights and indices to the vertex attributes, while bone matrices are written into a shader storage object. The skinning is computed on the GPU, by the vertex shader.
In the video you can see the final result of the implementation. The model has been imported from the Doom 3 format into my format, then animated and rendered by the 3D engine. For now, the quaternion keys are interpolated with a slerp every frame. A possible optimization is to pre-calculate all the bones into an SSBO at a fixed frame rate (like 60 fps) and use it to render a massive amount of meshes.
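For reference, this is roughly what the slerp between two rotation keys looks like; the Quat struct and function names are illustrative, not the engine's real types:
// Minimal sketch: spherical interpolation (slerp) between two rotation keys,
// as used when sampling a bone animation track.
#include <cmath>
struct Quat { float x, y, z, w; };
Quat slerp(Quat a, Quat b, float t)
{
    float dot = a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
    if (dot < 0.0f) {                       // take the shortest arc
        b = { -b.x, -b.y, -b.z, -b.w };
        dot = -dot;
    }
    if (dot > 0.9995f) {                    // keys are almost identical: linear blend
        return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y),
                 a.z + t * (b.z - a.z), a.w + t * (b.w - a.w) };
    }
    const float theta = std::acos(dot);
    const float s     = std::sin(theta);
    const float wa    = std::sin((1.0f - t) * theta) / s;
    const float wb    = std::sin(t * theta) / s;
    return { wa * a.x + wb * b.x, wa * a.y + wb * b.y,
             wa * a.z + wb * b.z, wa * a.w + wb * b.w };
}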
TextureMind Framework – Progress #13 – Vulkan – Advanced materials
I improved the material system by introducing a lighting stage. I removed the fragment stage and replaced it with a color stage and a lighting stage. The output color is calculated as the sum of the color stage and the lighting stage. The color stage has only one material node as input, which is used to produce the output color for this stage.
The lighting stage takes more inputs, like ambient, diffuse (or albedo), specular, roughness and metalness, which are mixed in a physically based rendering (PBR). Each input is connected to one material node that can be the result of an operation between more material nodes, so every stage can have its own textures or math operations between textures, uniforms and constants. In the video you can see a model with advanced materials, imported to show the benefits of the latest improvements. In this case, the ambient stage is rendered correctly and mixed with the diffuse textures.
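To give an idea of the expression-node concept, here is a tiny hypothetical sketch of nodes that emit GLSL expressions; the framework's real material components are different and much richer:
// Illustrative sketch only: a small expression-node tree whose evaluation could be
// turned into GLSL for one lighting-stage input. Node types and emitGlsl() are hypothetical.
#include <memory>
#include <string>
struct MaterialNode {
    virtual ~MaterialNode() = default;
    virtual std::string emitGlsl() const = 0;        // returns a GLSL expression
};
struct TextureNode : MaterialNode {
    std::string sampler;                             // e.g. "albedoMap"
    std::string emitGlsl() const override { return "texture(" + sampler + ", uv).rgb"; }
};
struct ConstantNode : MaterialNode {
    std::string value;                               // e.g. "vec3(0.04)"
    std::string emitGlsl() const override { return value; }
};
struct MultiplyNode : MaterialNode {
    std::shared_ptr<MaterialNode> a, b;              // "multiply" combines two nodes
    std::string emitGlsl() const override {
        return "(" + a->emitGlsl() + " * " + b->emitGlsl() + ")";
    }
};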
TextureMind Framework – Progress #12 – Vulkan – Import materials and normal maps
Now the importer based on the AssImp library is capable of importing model materials and textures into my format. I also added support for normal maps with tangent and bitangent vertex attributes, improving the lighting stage in the fragment shader to render them properly.
In the video you can see the nanosuit model imported from the Collada format. As the object rotates, you can see the benefits of bump mapping and specular textures.
TextureMind Framework – Progress #11 – Vulkan – Import 3D model
I decided to use the AssImp library to import models from other formats into my 3D mesh format. The video shows a first implementation of the importer.
Vertices and normals are converted along with the skeleton structure, while the red material is generated just to render the model on the screen. The next step is to load the materials and the associated textures.
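For reference, a minimal import of vertices and normals with the AssImp library looks roughly like this; the MyMesh structure is just a placeholder for my mesh format:
// Minimal sketch: read the first mesh of a file with Assimp and copy positions/normals.
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>
#include <vector>
struct MyMesh { std::vector<float> positions, normals; };
bool importFirstMesh(const char* path, MyMesh& out)
{
    Assimp::Importer importer;
    const aiScene* scene = importer.ReadFile(
        path, aiProcess_Triangulate | aiProcess_GenSmoothNormals |
              aiProcess_JoinIdenticalVertices);
    if (!scene || scene->mNumMeshes == 0)
        return false;
    const aiMesh* mesh = scene->mMeshes[0];
    for (unsigned i = 0; i < mesh->mNumVertices; ++i) {
        const aiVector3D& p = mesh->mVertices[i];
        const aiVector3D& n = mesh->mNormals[i];
        out.positions.insert(out.positions.end(), { p.x, p.y, p.z });
        out.normals.insert(out.normals.end(), { n.x, n.y, n.z });
    }
    return true;
}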
TextureMind Framework – Progress #10 – Vulkan – Materials and 3D rendering
Finally, the very first 3D model rendered by the 3D engine. Even if it looks like a simple torus demo, the main feature this time is the format used for the 3D mesh and the conversion from material nodes to a Vulkan shader for the rendering.
The mesh is composed of a polygon hull, a set of vertex attributes and a layout that defines the nature of the vertex attributes. The polygon hull represents the geometric structure of the mesh, while the vertex attributes define its graphical and physical aspect. A mesh can have virtually any number of vertex attributes: position, normal, colors, texcoords and other new attributes used by the material.
Materials are composed of expression nodes, which are converted to shaders in a second step. Every material has a layout with the vertex attributes required for the rendering. The material structure used to render this model is the following:
The layout of the mesh doesn't have to match the material's layout exactly: if the mesh has the required vertex attribute, it is used; otherwise zero values are used instead. It is up to the material to decide how to use the vertex attributes offered by the mesh. In this way, a single material can be used to render any kind of mesh. Of course, a mesh without normals cannot render diffuse or specular lighting, and one without texcoords cannot render textures, normal maps and so on.
Uniform buffers can be used by a single mesh to change the material content, like colors or texture coordinates. For instance, the diffuse color in this material can be connected to a uniform contained in a 3D mesh, which can be changed on the fly, changing the color of the object. In this way it's possible to reuse the same materials for multiple objects, even with different aspects, like particles or game characters.
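A hypothetical sketch of the layout-matching idea (attribute and type names are illustrative only):
// Illustrative sketch: check which of the material's required attributes the mesh
// actually provides; missing ones would be bound to zero-filled data at draw time.
#include <vector>
#include <algorithm>
enum class Attribute { Position, Normal, Color, TexCoord, Tangent };
struct MeshLayout     { std::vector<Attribute> attributes; };
struct MaterialLayout { std::vector<Attribute> required;   };
std::vector<bool> matchLayout(const MaterialLayout& mat, const MeshLayout& mesh)
{
    std::vector<bool> available;
    for (Attribute a : mat.required) {
        bool found = std::find(mesh.attributes.begin(), mesh.attributes.end(), a)
                     != mesh.attributes.end();
        available.push_back(found);              // false -> use zero values instead
    }
    return available;
}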
TextureMind Framework – Progress #9 – Vulkan – Materials and textures
I improved the implementation of materials and textures with Vulkan. Now every material is translated into a GLSL shader, which is compiled into SPIR-V code with the shaderc library. The shader is generated along with the graphics pipeline to match the material settings. For now, materials are very simple and are used to draw an image texture with alpha blending or a filled color.
As you can see from the video, the GUI now has a normal appearance instead of the rainbow rectangles of before. The next step is to support path rendering and font rendering, for drawing text. In the future, the same material system will be used to draw 3D content too.
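For reference, compiling a generated GLSL fragment shader to SPIR-V with shaderc boils down to something like this (error handling reduced to a simple check; this is not the framework's actual wrapper):
// Minimal sketch: GLSL source in, SPIR-V words out, using the shaderc C++ API.
#include <shaderc/shaderc.hpp>
#include <vector>
#include <string>
#include <cstdint>
std::vector<uint32_t> compileFragmentShader(const std::string& glslSource)
{
    shaderc::Compiler compiler;
    shaderc::CompileOptions options;
    shaderc::SpvCompilationResult result = compiler.CompileGlslToSpv(
        glslSource, shaderc_glsl_fragment_shader, "generated.frag", options);
    if (result.GetCompilationStatus() != shaderc_compilation_status_success)
        return {};                                   // compilation failed
    return { result.cbegin(), result.cend() };       // SPIR-V words for the pipeline
}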
TextureMind Framework – Progress #8 – Graphics context and 2D GUI with Vulkan
I am happy to announce that the Vulkan library has finally been integrated into my framework. For the moment nothing complicated: I limited myself to implementing a specialization of the graphics context that draws simple colored rectangles instead of the images drawn by the Cairo library. It's possible to invoke drawing commands with the same degree of complexity and with practically identical management of textures, materials and uniforms at the programming interface level.
Each rectangle is associated with a transformation matrix, which is translated into a uniform buffer. It's also possible to organize the rendering into multiple layers, allowing the reuse of command buffers with minimal programming effort.
As you can see from the above image, the 2D GUI based on the graphics context works quite well. It's possible to drag the windows and see them move on the screen at a high framerate, which is the main reason it's worth bothering with the Vulkan libraries.
For the moment there is an implementation of textures and materials, but I have not yet finished the rendering part at the shader level. The difficulty lies in the fact that the framework must resolve the material nodes to generate the proper GLSL shader to be converted into SPIR-V, create a suitable graphics pipeline and bind it before rendering. The next step is to finish this part and make the 2D GUI identical to the Cairo version.
Then I can proceed with implementing the 3D functionality, with full material management. The main goal is to implement an importer with the AssImp library and load 3D models. Then I will keep refining the 3D functionality into a sophisticated engine optimized for modern real-time computer graphics.
TextureMind Framework – Progress #7 – 2D GUI with Cairo
Finally I arrived at a first working version of the 2D GUI based on the Cairo libraries. The entire GUI architecture is based on 2D engine components like the graphics and physics engines. The graphics engine makes use of a graphics context that in this implementation is based on Cairo, but it can be specialized with any library.
As you can see in the video, I reused an old skin from Windows XP, but the skin is totally programmable and will be changed in the future. For now there are only simple widgets: form windows, buttons, options and check boxes. The next step is to implement other composed widgets like scroll bars, text boxes, tabs, lists, treeviews and so on. This GUI can be used for video games or to produce professional applications. The GUI is designed to run full screen or using the widgets of the operating system. The full-screen variant can be specialized to work with GPU libraries like Direct3D or Vulkan. As a modern feature, a transform matrix can be applied to every widget, so it can be translated, rotated, scaled or skewed with matrix operations. The interface can be designed with an external editor rather than with code embedded inside the application. The only code required on the application side is the one used to manage the widget events.
TextureMind Framework – Progress #6 – 2D Engine and assets
Having a graphics context to draw something on the screen is not enough when you have to deal with complex scenes made of many textures, materials, shapes and assets of any kind. This is the reason why, at some point of my framework's development, I introduced the concepts of Scene, Engine and Resources. Basically, a scene is a collection of elements that can be 2D or 3D objects like shapes or meshes, the Engine is a component used to process the scene, and the Resources are a set of textures, materials and assets. All these kinds of resources are referenced by elements through UUID strings.
I implemented different kinds of Engines. The 'Generic' Engine is used to pre-process the scene to prepare it for rendering, or possibly for other kinds of operations, like collision detection. When the generic engine iterates over the scene, all its internal geometries are transformed to be placed on the screen. The 'Graphics' Engine translates the transformed scene into a series of draw commands for the graphics context. The picture above shows a simple test of the Engine, with an element that is a 2D shape composed of three sub-paths (1 contour and 2 holes), with a radial texture material for the fill and a color material for the external stroke. Even if this test is simple, the Engine is designed to handle far more complex scenes and it will be used to create a whole 2D GUI from scratch.
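A minimal, purely illustrative sketch of the resource-by-UUID idea (the class names are not the framework's real ones):
// Illustrative sketch: resources stored in a map and referenced by scene elements
// through UUID strings, so elements never own the resources directly.
#include <string>
#include <unordered_map>
#include <memory>
struct Resource { virtual ~Resource() = default; };
struct Texture  : Resource { /* pixel data... */ };
class ResourceSet {
public:
    void add(const std::string& uuid, std::shared_ptr<Resource> res) {
        m_resources[uuid] = std::move(res);
    }
    std::shared_ptr<Resource> find(const std::string& uuid) const {
        auto it = m_resources.find(uuid);
        return it != m_resources.end() ? it->second : nullptr;
    }
private:
    std::unordered_map<std::string, std::shared_ptr<Resource>> m_resources;
};
// A scene element only stores the UUID of the material/texture it uses.
struct Element2D { std::string fillMaterialUuid; };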
TextureMind Framework – Progress #5 – Materials and path rendering
In my framework, I implemented materials to be extremely scalable. First of all, I decided to abandon the old format similar to 3D Studio Max or Maxon Cinema 4D and adopt another format more similar to UE4, based on visual expression nodes, where a node in this case is called a "material component".
A material is composed of different stages: displacement, fragment, blend and radiance. Every stage has parameters and a single component as input, which can be a texture with texture coords, diffusion with lights and normals, or the combination of more components with "add" or "multiply" nodes.
If program shaders are supported by the graphics context specialization, the material is translated into a shader program, otherwise it will be rendered as well as possible with the component types supported by the graphics library.
TextureMind Framework – Progress #4 – Windows and Cairo graphics context
I implemented a set of classes to handle system windows and events. Now it's possible to open a window and draw an image inside it. I also programmed an abstract class for the graphics context to handle the graphics functionality in common with the most important graphics libraries, like DirectX, OpenGL and Vulkan, even if the first specialization of the context makes use of the Cairo library to support software rendering.
The abstraction layer restricts the context to the features available in the graphics library that specializes it. For example, Cairo has support for linear and radial patterns and path rendering, but other patterns cannot be programmed with shaders. If a feature is not supported by the library, it is reported as not supported by an enum function exposed by the abstract class. In this way, the component that is using the rendering context is aware of the features that are available and can make the best use of them. The image shown in the example is a demo written with the specialized class that makes use of the Cairo library, with a linear pattern and path rendering.
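A hypothetical sketch of the feature-query idea (enum and class names are illustrative, not the framework's real interface):
// Illustrative sketch: an abstract graphics context reports which capabilities its
// specialization supports, so callers can adapt to what is actually available.
enum class GfxFeature { PathRendering, LinearPattern, RadialPattern, ProgramShaders };
enum class GfxSupport { NotSupported, Supported };
class GraphicsContext {
public:
    virtual ~GraphicsContext() = default;
    virtual GfxSupport querySupport(GfxFeature feature) const = 0;
};
class CairoContext : public GraphicsContext {
public:
    GfxSupport querySupport(GfxFeature feature) const override {
        switch (feature) {
        case GfxFeature::PathRendering:
        case GfxFeature::LinearPattern:
        case GfxFeature::RadialPattern:
            return GfxSupport::Supported;          // handled natively by Cairo
        default:
            return GfxSupport::NotSupported;       // no program shaders in Cairo
        }
    }
};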
TextureMind Framework – Progress #3 – Graphics context and external libraries
One of the most important components in a framework is a cross-platform loader of dynamic libraries. Without it, you cannot access the functionality of external dynamic libraries like OpenGL, DirectX or Vulkan, or at least you may have to add extra code for every library on every platform you have to support. In some cases it's better not to statically link a dynamic library and to use LoadLibrary() or dlopen() instead. With this component, I don't have to worry about how the library is linked or what platform or operating system I'm about to support; the effort of loading and linking an external library is very small. After that, I decided to use this component to dynamically link DevIL and implement full support for image conversions with this library. I also implemented a full set of classes to handle 2D shapes and 3D objects.
Another fundamental component for every 2D or 3D engine is the graphics context. In my framework, a graphics context is an abstraction layer over the functionality exposed by the rendering context of a graphics library, like OpenGL or Direct3D. Once I had defined a full set of draw commands for drawing 2D shapes and 3D objects, I made a first specialization of this interface using the Cairo library with path rendering, for drawing 2D graphics only.
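A minimal sketch of such a loader built on LoadLibrary()/dlopen(), with error handling kept to a minimum (this is not the framework's actual class):
// Illustrative cross-platform dynamic library loader: open a library, resolve a
// symbol, release the handle on destruction.
#ifdef _WIN32
  #include <windows.h>
#else
  #include <dlfcn.h>
#endif
class DynamicLibrary {
public:
    bool open(const char* path) {
#ifdef _WIN32
        m_handle = ::LoadLibraryA(path);
#else
        m_handle = ::dlopen(path, RTLD_NOW);
#endif
        return m_handle != nullptr;
    }
    void* symbol(const char* name) const {
#ifdef _WIN32
        return reinterpret_cast<void*>(::GetProcAddress(static_cast<HMODULE>(m_handle), name));
#else
        return ::dlsym(m_handle, name);
#endif
    }
    ~DynamicLibrary() {
        if (!m_handle) return;
#ifdef _WIN32
        ::FreeLibrary(static_cast<HMODULE>(m_handle));
#else
        ::dlclose(m_handle);
#endif
    }
private:
    void* m_handle = nullptr;
};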
TextureMind Framework – Progress #2 – Improve serialization and math classes
Even if this framework has been designed for generic purposes, it will be used mainly to program graphics applications. With this in mind, I implemented a full set of serializable classes to handle complex numbers, vectors, matrices and all the geometric operations that will be used to realize a 3D engine.
To serialize enum variables that want constants instead of numbers, I introduced "constant strings" (i.e. LEFT, GREATER, NULL) in human readable formats like XML or JSON. In this case, when the variable is deserialized by the framework, the constant string is translated into its respective numeric value; conversely, the numeric value is translated into its constant string during the serialization process.
For instance, an extended vector 2D with anchor variables:
enum PositionAnchorEnum {
    TMD_POSITION_ANCHOR_LEFT = 0,
    TMD_POSITION_ANCHOR_RIGHT = 1,
    TMD_POSITION_ANCHOR_TOP = 2,
    TMD_POSITION_ANCHOR_BOTTOM = 3,
    TMD_POSITION_ANCHOR_NEAR = 4,
    TMD_POSITION_ANCHOR_FAR = 5
};
template <class T> class ExtVector2 : public Vector2<T>
{
public:
    [...]
    T m_x;
    T m_y;
    PositionAnchorEnum m_xAnchor;
    PositionAnchorEnum m_yAnchor;
};
[...]
ExtVector2<float> origin;
origin.m_x = 0;
origin.m_y = 0;
origin.m_xAnchor = TMD_POSITION_ANCHOR_LEFT;
origin.m_yAnchor = TMD_POSITION_ANCHOR_TOP;
is saved to:
<origin x="0" y="0" xAnchor="LEFT" yAnchor="TOP" />
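To illustrate the idea, a hypothetical translation table between the enum values and their constant strings could look like this; the framework's real serialization code is more general than this sketch:
// Illustrative sketch: convert an anchor enum to its readable name during
// serialization and back during deserialization.
#include <string>
#include <utility>
enum PositionAnchorEnum { TMD_POSITION_ANCHOR_LEFT = 0, TMD_POSITION_ANCHOR_RIGHT = 1,
                          TMD_POSITION_ANCHOR_TOP  = 2, TMD_POSITION_ANCHOR_BOTTOM = 3 };
static const std::pair<PositionAnchorEnum, const char*> kAnchorNames[] = {
    { TMD_POSITION_ANCHOR_LEFT,   "LEFT"   },
    { TMD_POSITION_ANCHOR_RIGHT,  "RIGHT"  },
    { TMD_POSITION_ANCHOR_TOP,    "TOP"    },
    { TMD_POSITION_ANCHOR_BOTTOM, "BOTTOM" },
};
std::string anchorToString(PositionAnchorEnum value)
{
    for (const auto& entry : kAnchorNames)
        if (entry.first == value) return entry.second;
    return "LEFT";                                 // fallback for unknown values
}
PositionAnchorEnum anchorFromString(const std::string& name)
{
    for (const auto& entry : kAnchorNames)
        if (name == entry.second) return entry.first;
    return TMD_POSITION_ANCHOR_LEFT;               // fallback for unknown names
}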
TextureMind Framework – Progress #1 – Serialization and log
I continued to program the TextureMind Framework and I'm pretty happy with the result. I hope this framework will give me the chance to increase the production of my software and to save most of my time (because I don't have much of it). People have told me many times to use already existing frameworks to produce my works, and I tried. Most of them are not suitable for what I want to do, or they have issues with their licenses, or simply I don't like them. I want to make something new and innovative, and I feel like I'm about to do it.
- Serialization
Let me say that the serialization is a masterpiece. You can program new classes directly in C++ with a very easy pattern, then save and load all the data in four formats: raw (*.raw), interchangeable binary (*.tmd), human readable XML (*.xml) and JSON (*.json).
TextureMind Framework – Work in progress
Description
TextureMind Framework is written in C++ and has been designed for the development of a wide range of cross-platform applications. The framework is composed of a set of classes to facilitate multi-threading, serialization, IPC, networking, graphics and computer vision. The framework comes along with applications to speed up the creation of images, animations, GUIs and videogames. It has been coded by me from scratch with the aim of reaching the highest standards after years of professional experience in the field of computer programming.
Super Mario Bros for Amstrad CPC 464
This is an attempt to port the famous Super Mario Bros for NES to the Amstrad CPC 464. It has been programmed by me in C89 and Z80 assembly. The current status of the project is blocked but not abandoned. I started this project in 2015 but I never had time to continue it because of my current job and thousands of other projects. Recently, I found the time to clean up and release the .dsk that you see in the latest YouTube video:
If you click the "Download" button, you can get the .dsk with the first playable demo of SMB for Amstrad CPC 464. You can run it with any Amstrad CPC emulator, like WinApe. Do not expect a playable game, this is just a demo, but you can still control the player with the same physics as the original title for NES. Enjoy!
Background
When I was a little kid I remember that I really wanted to create a Super Mario Bros game for the Amstrad CPC 464. Now that I am 33 and work as a software engineer, I asked myself: why don't you make your old dream come true? :) Finally I found the time to create a demo with the famous Level 1-1 of Super Mario Bros. The horizontal hardware scrolling needs a double buffer in order to get an accuracy of 4 pixels. The demo runs at original Amstrad CPC 464 speed emulated by Caprice. It is pretty fast and can loop horizontally with a limit of 512 tiles, while level 1-1 takes only 212 tiles. I readjusted the original SMB graphics to fit a 256x192 Mode 1 screen with 4 colors. I really like the effect of the grayscale map mixed with the blue sky, like in the original NES game. This demo has been programmed with SDCC in C and Z80 assembly.
CJS Framework – Abandoned
The project is marked as abandoned because:
- The name CJS Framework was already used by other frameworks, I don't like it anymore and it doesn't reflect the current purpose of the framework
- I don't have the time and resources to achieve the ambitious requirements I set in the past (supporting interchangeable classes between four languages and making it a standard is too expensive; I cannot run it as an open source project because it is not convenient for my current job, too many implications)
- The framework evolved in different directions. I changed the name to TM Framework (TextureMind Framework), which is a useful collection of classes for developing my own programs in C++ or C# when the projects become complicated enough to require it. Being a framework slave is never a good thing for a developer: I have seen developers become incapable of doing the simplest things in C from scratch once they were anchored to Qt, Boost, Unity, Unreal Engine 4 or their own frameworks. I have also seen good programmers become incapable of doing good plain programming (not even a Pac-Man game) without designing frameworks that would require years to be finished, so I don't want to feed this trend.
After a long search for the perfect cross-platform multi-target language, I didn't find anything that satisfied my expectations. I wanted to create C++, C#, Java and JavaScript projects around a set of basic classes, functionalities and serialization to import/export files in a custom format.
In a first evaluation, the Haxe language with JSON seemed to be the perfect choice, but after using it I realized that the code produced for the languages of my interest is too heavy, badly indented and hard to reuse in the context of a specific project. That's because Haxe and the other multi-target languages are designed to produce a final result (i.e. a game) and not source code that is easy to understand for a human reader or to include in other parts of a wide project. For instance, if I create a Hello World example with Haxe and convert it to C#, it generates a lot of useless .cs files in a well structured (and unreadable) project that you can build and run at the first attempt, but that is barely readable and hard to reuse in other clean C# projects. If you are thinking of creating a library in Haxe to include the source code in your projects, you'd better change your mind: your C++, C#, Java or JavaScript projects may end up a gigantic mess.
For this reason, I decided to create from scratch a new framework called CJS Framework, which stands for CppJavaScriptSharp Framework. It's mainly a full bridge between these four languages (but it's not improbable that it will support other languages in the future). The basic idea is very simple: make a set of classes that are useful in a project and a serialization system that easily loads objects regardless of the language used. For instance, we could decide to take advantage of the .NET framework and produce an editor in C# to manipulate the maps of a game that will be programmed in JavaScript and run on Facebook or in the web browser. Or a server application in C++ that communicates in a binary format with an HTML5 client application coded in JavaScript. The framework is designed to satisfy all these tasks easily, and much more.
Unrelated Engine – Deferred Rendering and Antialiasing
I tried to implement explicit multisample antialiasing and I got good results, but it's slow on a GeForce 9600GT. A scene at 110 fps dropped to 45 fps with only four samples, just to give an idea of the slowdown. While I was jumping to the ceiling for the amazing image quality of a REAL antialiasing with deferred shading (not the fake crap called FXAA), I fell back to the floor after seeing the fps, what a shame.
Anyway, I decided to switch from deferred shading to a deferred lighting model, just to implement a good trick that lets me use classic multisampling (which on my card does pretty well even with 16 samples!): reading from the light accumulation buffer in the final step and writing the geometry to the screen with antialiasing enabled. The result is a little weird, but you can fix it by using that crap FXAA on the light accumulation buffer, which is smoother than the other image components. For example, I can use mipmapping or anisotropic filtering to eliminate the texture aliasing, FXAA to eliminate the light accumulation buffer aliasing and finally MSAA to eliminate the geometry aliasing.
PS: I used the nanosuit model from this site: www.gfx-3d-model.com/2009/09/nanosuit-3d-model/
TSREditor – Last screenshots
TSREditor is a huge editor in the style of Blender3D, designed to create or edit resources like textures, 3D models, sounds and levels for games and other stuff.
In spite of the large amount of work needed to reach a decent version of this software, I have decided that it will be free as part of my Unrelated Framework. At the moment I'm far from a decent beta version to release, but I can show you two nice screenshots of the program at work.
Unrelated Engine – Work in progress
This is the first "work in progress" video of my 3d engine called Unrelated Engine. Some complex animated models come from Doom 3. They were converted to a maximum of 4 weights per vertex as well as the identical vertices have been cancelled to improve the speed. The shader language was used to obtain a large amount of skinned meshes and complex materials with a reasonable speed.
The 3d models are from doom3 and from http://www.models-resource.com/, they were used only to test my engine and to make this video. The rights of these 3d models and the music are reserved by their respective authors.
Unrelated Engine – A nice test with Mario 64 map
I created a nice video with an engine that I'm still developing and that is part of my Unrelated Framework. It uses:
- OpenGL (to draw the graphics)
- DevIL (to import images)
- Assimp (to import 3d models)
The 3D models are from http://www.models-resource.com/; they were used only to test my engine and to make this video. The rights to these 3D models and the music are reserved by their respective authors.
Unrelated Framework – Inclusion of OpenIL/DevIL
OpenIL is a library with very powerful image loading capabilities and I decided to include it in my framework. My standard can handle several image formats, like the classic 8-bit RGB, or more advanced formats like 16-bit RGB or High Dynamic Range. Images are imported from file and used while maintaining the original format (where possible). For other features check the OpenIL official web site (http://openil.sourceforge.net/)
Supports loading of:
* Windows Bitmap - .bmp
Supports saving of:
* Windows Bitmap - .bmp
This is an example of how it works in my framework:
C_Image *newImage = new C_Image();
//load from file (the format can be detected automatically or forced explicitly)
newImage->ImportFromFile("test.jpg");
newImage->ImportFromFile(UF_FILE_JPG, "test.jpg");
//export to file
newImage->ExportToFile(UF_FILE_HDR, "test.hdr");
//set jpeg compression as the file format
newImage->SetFileFormat(UF_FileFormat_JPG(50)); //the image will be saved in my format with a jpeg quality of 50
newImage->SaveToFile("test.img");
Gianpaolo Ingegneri
Copyright © 2010 – All rights reserved
Unrelated Framework – Inclusion of ZLib
The Unrelated Framework got support for Lempel-Ziv compression of data using ZLib. Classes can be compressed in memory, loaded and saved with a few lines of code and without limits. For example:
//this works only with resources like images, fonts, sounds, etc...
C_Image *newImage = new C_Image();
newImage->ImportFromFile("test.tga"); //load image from file
newImage->SetFileFormat(UF_FileFormat_Zip(6)); //set the zip file format with compression level 6
newImage->SaveToFile("test.img"); //save the zipped resource on file, simple isn't it?
//if you want to load...
newImage->LoadFromFile("test.img"); //the system understands that it was zipped
newImage->SetFileFormat(NULL); //set it NULL if you don't want the zip compression in the future
or
//this can be used for every kind of object
C_Image *newImage = new C_Image();
newImage->ImportFromFile("test.tga"); //load image from file
C_Object_Zip *objZip = new C_Object_Zip(); //init a zip container
objZip->CompressObject(newImage, 6); //compress the image object
objZip->SaveToFile("test.zob"); //save the zipped object on file
delete newImage;
//if you want to load...
objZip->LoadFromFile("test.zob"); //load the zipped object from file
newImage = (C_Image *)objZip->UncompressObject(); //get the uncompressed object
delete objZip; //delete and free the zip object memory
Gianpaolo Ingegneri
Copyright © 2010 – All rights reserved
Unrelated Framework – Gui and Gui Editor
Finally I completed the GUI of my framework. The best feature is that the GUI can work in two modes: via software or using OpenGL. It's very helpful for cross-platform compatibility, for video games or other OpenGL purposes.
The GUI is in its first version but it has all the widgets necessary to create professional applications. An interesting feature is that you don't need to write a single line of code to create particular interfaces: with the GUI Editor you can easily design all kinds of professional interfaces and load them in your program using a few functions. You don't need to code the widgets to make them work properly. In this way you can save hours of programming.
Here is the full list of implemented widgets:
- Button
- Radio button
- CheckBox
- Form
- Frame Window
- FrameBox
- PictureBox
- ScrollBar
- Scroll space
- TextBox
- ComboBox
- Menu
- ListView
- TreeView
- ToolBar
- Image button
- Graphic Api Viewer
Of course the GUI was coded in C++, it's object oriented and it's an integral part of my Unrelated Framework.
Gianpaolo Ingegneri
Copyright © 2010 – All rights reserved
Unrelated Engine – Aquarium 3D
Aquarium 3D is a little demo of an engine that I'm developing for my framework. It uses a multithreading system with one thread for the physics engine and one thread that draws the graphics on the screen: the two threads are perfectly synchronized to maintain the best fluidity possible at different framerates.
The physics engine runs a fixed number of iterations per second, in this case 30. It can obtain good fluidity of movement even at higher graphics card framerates (like 75, for example) by upscaling the fixed framerate with a series of trajectory corrections. It also uses OpenGL for graphics, GLUT to open the window and lib3ds to import the 3D Studio meshes. The fish models are property of this site: http://toucan.web.infoseek.co.jp/3DCG/3ds/FishModelsE.html
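For reference, the core of the fixed-timestep idea, reduced to a single-threaded sketch with a toy physics state; the demo itself uses two synchronized threads, so this only shows the timing logic:
// Illustrative sketch: physics advances at a constant 30 steps per second while
// rendering runs as fast as possible and interpolates between the last two states.
#include <chrono>
struct State { float x = 0.0f, vx = 10.0f; };        // toy physics state
State integrate(State s, float dt) { s.x += s.vx * dt; return s; }
void  render(float /*interpolatedX*/)               { /* draw the frame */ }
void runLoop()
{
    using clock = std::chrono::steady_clock;
    const float step = 1.0f / 30.0f;                 // 30 physics iterations per second
    float accumulator = 0.0f;
    State previous, current;
    auto  last = clock::now();
    for (int frame = 0; frame < 1000; ++frame) {     // bounded loop for the sketch
        auto now = clock::now();
        accumulator += std::chrono::duration<float>(now - last).count();
        last = now;
        while (accumulator >= step) {                // catch up with fixed steps
            previous = current;
            current  = integrate(current, step);
            accumulator -= step;
        }
        float alpha = accumulator / step;            // blend factor between states
        render(previous.x + (current.x - previous.x) * alpha);
    }
}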
Gianpaolo Ingegneri
Copyright © 2010 – All rights reserved
Ultra Fast Interpolation (via software)
In general, interpolation is a method used to construct a range of values from a set of data points. In digital image processing there are several interpolation methods to improve the appearance of a transformed image, but there is a problem: all of them are too slow to work in software in real time. In fact, we can see fast interpolation in 3D games only because it is performed in hardware by the graphics card (in the past it was very difficult to see interpolation performed in real time). However, interpolation is useful not only for 3D engines; it is an important part of digital image processing, so there is a real need to make it faster, especially if you have to work with a large amount of images at the same time. For this reason I have programmed from scratch a set of optimized interpolation algorithms that are as accurate as the classic ones... but 100 times faster! The only limitation is that they can be used only for scale transforms, but numerically they are perfect and fast at the same time.
Moreover, they are useful to generate procedural images in real time, like textures, that can be used in a 3D engine or in paint software where you cannot rely on a 3D card to speed everything up. In this demo you can see the high performance of different algorithms on images at 16 bits per pixel. To make sure that my interpolation is numerically correct, I have also included the calculation of the normal map and the bump mapping effect. With biquadratic interpolation you obtain a perfectly scaled bump map, because the normal map calculation is derivative and the biquadratic is a second-order reconstruction filter.
These are the performances on my computer:
Zoom 1x (on a 512x512 image)
- Nearest: 400 fps - Bilinear: 270 fps - Biquadratic: 80 fps
Zoom 4x (on a 512x512 image)
- Nearest: 965 fps - Bilinear: 780 fps - Biquadratic: 245 fps
Zoom 16x (on a 512x512 image)
- Nearest: 1405 fps - Bilinear: 1165 fps - Biquadratic: 390 fps
As you can see, the speed of the algorithm grows with the size of the zoom; however, it is very fast even at the minimum size of 1x. This condition is very useful if you have to resize images or generate textures with a large amount of stretched layers, like Perlin noise. CPU: Intel Core 2 Quad 2333 MHz; RAM: 4 GB DDR2 800 MHz
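For reference, a naive, unoptimized version of the bilinear case on an 8-bit grayscale image looks like this (the real engine works on 16-bit images and is far more optimized than this sketch):
// Illustrative sketch: bilinear upscale of an 8-bit grayscale image.
// Assumes dstW and dstH are both greater than 1.
#include <vector>
#include <cstdint>
std::vector<uint8_t> bilinearScale(const std::vector<uint8_t>& src,
                                   int srcW, int srcH, int dstW, int dstH)
{
    std::vector<uint8_t> dst(dstW * dstH);
    for (int y = 0; y < dstH; ++y) {
        float fy = (float)y * (srcH - 1) / (dstH - 1);   // source row coordinate
        int   y0 = (int)fy, y1 = y0 + 1 < srcH ? y0 + 1 : y0;
        float wy = fy - y0;
        for (int x = 0; x < dstW; ++x) {
            float fx = (float)x * (srcW - 1) / (dstW - 1); // source column coordinate
            int   x0 = (int)fx, x1 = x0 + 1 < srcW ? x0 + 1 : x0;
            float wx = fx - x0;
            float top = src[y0 * srcW + x0] * (1 - wx) + src[y0 * srcW + x1] * wx;
            float bot = src[y1 * srcW + x0] * (1 - wx) + src[y1 * srcW + x1] * wx;
            dst[y * dstW + x] = (uint8_t)(top * (1 - wy) + bot * wy + 0.5f);
        }
    }
    return dst;
}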
Gianpaolo Ingegneri
Copyright © 2010 – All rights reserved
Old version of Gloom Space (abandoned)
Gianpaolo Ingegneri
Copyright © 2010 – All rights reserved
An old Mario64 clone (abandoned)
Years ago I coded a 3D engine to create my own clone of Super Mario 64. The name of the project was Chronicle Time Quest and it never reached its final release or a version that was remotely playable. Unfortunately it was abandoned years ago for lack of time, resources and interest. However, it was useful as a coding experience and the result wasn't bad at all, so I have decided to publish this video to show something about it.
This project will probably be restored at some point, with a new concept (probably similar to the Mario Galaxy one) and a totally new 3D engine. If you like this idea, or the video, or the concept, or whatever you want, please write a comment to let me know what you think about it. Thanks.
Gianpaolo Ingegneri
Copyright © 2010 – All rights reserved
Unrelated Framework – Abandoned
The project started from a small framework that I coded for the Amiga 1200. Now the project is much more advanced and has been developed for many years on PC/Windows platforms. In the past it was called Ultimate Framework, but there were already several frameworks with the same name, so I decided to rename it to Unrelated Framework.
Some features:
- Programmed entirely in C++
- Serialization of Classes
- Cross-platform design
- New image format for digital image processing
- New surface format for computer graphics
- New format for multi channel textures (color, alpha, bump, normals, z-buffer …)
- Algorithms designed to work also via software
- Proprietary format for bitmap fonts
- Powerful procedural generator of textures, static and animated
- Wrappers for OpenGL, OpenCV, OpenEXR, FreeType, etc…
- Flexible GUI designed to work via software and via hardware
This framework is still a work in progress and I used it to produce much of the software that you can see on this site.
Gianpaolo Ingegneri
Copyright © 2011 – All rights reserved
Texture Generation V1.0
This time it was a little bit harder. As you can see in this video, my engine can generate very complex textures in real time, with the maximum detail at the maximum possible speed.
Each texture is generated in real time at the frame rate shown in the top-left corner of the window (the "generation" label).
- Lumps (512x512, 200 objs, 16 bpp, 345 FPS)
- Blobs (512x512, 16 bpp, 107 FPS)
- Perlin noise (512x512, 8 octaves, 16 bpp, 115 FPS)
Also, the normal map and the bump mapping are calculated in real time. I used 16 bits per pixel to have more precision when computing the normal map (for more information check my High Static Range video). The precision and speed of this engine are awesome. The final result can loop in all horizontal and vertical directions.
CPU: Intel Core 2 Quad 2333 MHz; RAM: 4 GB DDR2 800 MHz
©2009 Gianpaolo Ingegneri
High Static Range
This demo shows a new feature of the texture generation engine implemented in my Unrelated Framework. The normal map used by the bump mapping is calculated from a heightmap generated with two different methods: Low Static Range and High Static Range. The environment bump mapping shows the difference in quality between the two ranges.
The first range uses only 8 bits per pixel, which is too low to represent a continuous surface like the sphere generated in this example; as a result we can see many rings on the surface during the light effect. On the contrary, a high range of 16 bits per pixel is perfect for a light effect on a continuous surface: in fact we don't get any kind of imperfection. I could use the famous High Dynamic Range to generate the height map, but in my opinion it would be too expensive in memory use and speed (32-bit floating point per pixel, while 16-bit half-float would require continuous conversions because the CPU doesn't support it natively).
©2009 Gianpaolo Ingegneri
Picture Tube
This is a demo about drawing loop textures with a picture tube technique. The blit functions of my engine can write an alpha source image to a destination buffer while preserving the alpha channel information. This technique is useful to design alpha textures and to make background effects like the one shown in the picture.
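A rough sketch of an alpha-preserving "over" blit on 8-bit RGBA pixels (this is not the engine's optimized blitter, just the idea):
// Illustrative sketch: composite source over destination and also accumulate the
// alpha channel, so the destination keeps usable alpha information.
#include <cstdint>
#include <cstddef>
struct Rgba { uint8_t r, g, b, a; };
void blitOver(const Rgba* src, Rgba* dst, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i) {
        const Rgba s = src[i];
        const Rgba d = dst[i];
        const unsigned sa = s.a, inv = 255 - sa;
        dst[i].r = (uint8_t)((s.r * sa + d.r * inv) / 255);
        dst[i].g = (uint8_t)((s.g * sa + d.g * inv) / 255);
        dst[i].b = (uint8_t)((s.b * sa + d.b * inv) / 255);
        dst[i].a = (uint8_t)(sa + d.a * inv / 255);      // preserve/accumulate alpha
    }
}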
©2008 Gianpaolo Ingegneri
Blit Tech (via software engine)
Finally I've released what I hope will be the first in a long series of demonstrations of the potential of my Unrelated Framework. This demo shows some graphic effects that demonstrate the enormous flexibility of the blit engine. Strictly coded via software, it can run on low-end configurations with an excellent frame rate.
There is a custom font format, very useful for future development on platforms where there is no FreeType support, or to work around the annoying problem that you cannot use the TrueType hinting information without a specific license. Moreover, my engine includes other effects like alpha blending, complex bitmap fonts and bump mapping. There is no use of the graphics card, because all the drawing algorithms have been reprogrammed from scratch by myself and the image buffer is displayed in fullscreen using the basic GDI functions of the operating system. However, in the architecture of my framework you can overload all the software drawing functions with the graphics card functions of other famous APIs like OpenGL or Direct3D.
©2008 Gianpaolo Ingegneri
Textures (06)
Textures (05)
Textures (04)
Textures (03)
Textures (02)
Here we have another nice collection of textures (always of my own creation) that you can safely use in your web productions, software, and so on. If you intend to use my work, please write me an email to let me know, or replicate this same post, or credit me in what you have created. To access the textures (like all other content on this site), you must click on "continue reading" to see the full post. Thanks for your attention.
TexAviTure v0.05 Beta
For the delight of many graphic artists and webmasters, I've released my old (and unpublished) utility for generating procedural animated textures, which can loop in all directions and also over a time period without side effects or jumps. The program is in a very early beta version and it only supports Perlin noise and cellular textures, to create the main effects in great demand especially for 3D games (such as lava, flame or candle). It is able to save the created animation as individual frames in bitmap format or as an entire video in AVI format. (download)
©2008 Gianpaolo Ingegneri
Super Ball Smasher
This clone of Pang that I programmed some time ago has finally been released in its complete version. You can enjoy 50 stages distributed across 5 game levels, with themed settings and evocative music created specifically to immerse the player in the atmosphere of each level without degenerating into the annoyance of repetitiveness.
The concept, graphics and music are totally original and designed to offer an interesting variation on the usual, monotonous scheme of Pang clones, without weighing the game down; on the contrary, they make it more fun and longer lasting. The end-of-game boss tries hard to repay the player's expectations, throwing them into a final challenge that is not excessively complex, but not too trivial either, with the aim of turning the long journey through the 5 game levels into a final victory. The whole project was created in a short time using open source software. The only version available at the moment is for PC/Windows, which you can download by clicking directly here (download). Have fun.
Gianpaolo Ingegneri.
Stereograms (01)
You can see these images in 3D using the cross-eyed technique. I suggest you try first the image with the castle, which is the best one, in particular if you are trying to learn the right way to focus on the depth information with your eyes.
Textures (01)
I added some textures that I made some years ago. To create them I used my old software for creating procedural textures, which was called Virtual Surface. I never completed it for lack of time and interest. Perhaps this project will be reborn in the future with totally new concepts and objectives. In the meantime enjoy this collection of textures.
New website
Hello everyone. Welcome to my new personal page. I am moving here for good from my old site TSRevolution (www.tsrevolution.com), since it was part of a past project that I no longer intend to continue. I thought of setting up this new site as a blog; however, it won't be like those pages people use nowadays to write a diary of their personal affairs. I will try to always stay on the topic of computer programming, releasing sources, projects and articles on some computer graphics subjects. I am looking for some sites (related to the topic) to link, so if you are a programmer, have a page and are interested, you can contact me at the email address gingegneri82@hotmail.it, or leave a comment on this post. A special thank you goes to the authors of WordPress and to the artist who designed the graphical look of this page.
Happy browsing!