As requested by @idrassi, a system should be implemented that collects VeraCrypt (VC) crash reports and makes them available to the developer team.
The requirements are loosely defined as:
"What is needed is a webpage (in PHP, for example) that would gather this information and store it in a database in a usable format. An admin interface should allow exploration of the entries in the database and the ability to export them.
Additionally, the webpage should include protection against spam and DOS attacks."
"Concerning the crash reporting mechanism, it collects the following information:
Program version: The specific version of VeraCrypt that encountered the issue.
Operating system version: The version of the OS on which the crash occurred.
Hardware architecture: Information about the CPU architecture (e.g., x86_64, ARM).
Checksum of the VeraCrypt executable: A checksum that helps verify the integrity of the executable.
Error category: The signal number indicating the type of error.
Error address: The memory address where the fault occurred.
Call stack: The sequence of function calls leading up to the error.
It's important to note that no personal information is included in the crash reports. The call stack captured is purely technical and does not contain any user data.
That being said, the server will naturally receive the user's IP address as part of the HTTP request. However, this IP address should not be stored in the database to protect user privacy. At the same time, implementing rate limiting or other mechanisms based on IP addresses would be a necessary step to protect against potential DOS attacks or spam submissions.
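One way to reconcile these two requirements (IP-based throttling, but no stored IPs) is to key the rate limiter on a keyed hash of the client address that is never persisted. A minimal sketch in Python (the project itself is PHP; names and the rotation policy here are illustrative assumptions, not part of the spec):

```python
import hashlib
import hmac
import secrets

# Assumption: a secret rotated periodically (e.g. daily) and kept only in
# memory/config; once rotated, old throttle keys can no longer be linked
# back to an IP address.
THROTTLE_SECRET = secrets.token_bytes(32)

def throttle_key(ip: str) -> str:
    """Derive an opaque rate-limiting key from the client IP.

    The raw IP is never written anywhere; only this HMAC digest is used
    as the counter key, so the database stays free of personal data.
    """
    return hmac.new(THROTTLE_SECRET, ip.encode(), hashlib.sha256).hexdigest()
```

While the secret is stable, the same IP always maps to the same key, which is all the throttling logic needs.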
At this stage, I think the primary focus should be on:
Database Design: structure and format of data, without retaining IP addresses or any other personal information.
Web Application Security: Implementing measures like rate limiting or CAPTCHAs to protect against abuse.
"
Last edit: Gaetano Giunta 2024-09-02
@idrassi first round of questions:
- authentication of the people who will be able to access the data: I imagine the data will not be made available to the general public, but a team of developers will have access, with a subset being the maintainers. How do we manage authentication/authorization?
- interface for searching over the collected data: I'd think a simple web page allowing filtering over each field, possibly with wildcards, would be enough. Or should we start out with a more sophisticated search engine, a la Solr/Elastic?
- user experience for the end users: do they get a pop-up asking whether they want to submit the data, or is the process supposed to be fully automatic? The pop-up would allow for adding a CAPTCHA...
- does it make sense at all to use public-key crypto to make sure that only legit VeraCrypt binaries can be used to submit data? (Note that I never investigated safe implementations of this feature - please ignore if this question makes no sense.)
- git repo: I presume this could live in its own repo - or should it be within the main VC repo?
As for sketching out the implementation, my personal preference is for:
- implement in PHP, using Symfony components: stable tech, with a foreseeable long support window, well known and easy to pick up. 2nd choice: Django
- postgresql db for storage (alternative: elastic or solr)
- a docker + docker-compose environment to ease deployment of test/staging environments - not really necessary, as php/nginx/pgsql are easy to deploy standalone, but it is nice to have a fully automated way to set up the environment
- debian or ubuntu base os for the container images
The database design so far would be 1 table, possibly with reference tables for those fields whose values come from a predefined set. All of the values you mentioned above seem to be "loose" enough to require a varchar field, with the call-stack data possibly requiring a text field. I'd add a timestamp column, and a checksum column storing a hash of the data, which could be used to help alleviate the issue of dupes.
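The single-table design described above (loose varchar fields, text for the call stack, a timestamp, and a dedupe checksum) could be sketched roughly as follows. Python/sqlite3 is used purely for illustration; the column names are assumptions, not the project's actual schema:

```python
import hashlib
import sqlite3

# Assumed schema: one row per submitted report, plus a hash column to
# spot duplicate submissions.
SCHEMA = """
CREATE TABLE IF NOT EXISTS crash_report (
    id               INTEGER PRIMARY KEY,
    program_version  VARCHAR(255),
    os_version       VARCHAR(255),
    architecture     VARCHAR(64),
    executable_cksum VARCHAR(64),
    error_category   VARCHAR(32),
    error_address    VARCHAR(32),
    call_stack       TEXT,
    received_at      TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    dedupe_hash      VARCHAR(64)
);
"""

def dedupe_hash(fields: dict) -> str:
    # Hash a canonical serialization of the submitted fields, so that
    # identical reports always map to the same value.
    canonical = "\n".join(f"{k}={fields[k]}" for k in sorted(fields))
    return hashlib.sha256(canonical.encode()).hexdigest()

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```

Sorting the field names before hashing makes the checksum independent of submission order, which keeps duplicate detection stable.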
Another question: is it worth investing time in the investigation of existing projects/libraries which might implement this, or is it desirable to keep the external code to a minimum, and have it fully built in-house (eg. for auditability purposes)?
@ggiunta: Here are my answers:
Hosting: VeraCrypt website is hosted on a dedicated server, and the crash reporting will be hosted on the same server. This ensures easy management and integration with existing infrastructure.
Authentication/Authorization: There should be a mechanism to define the administrator credentials during the first installation. Afterward, the administrator can add extra accounts. A simple authentication mechanism based on MySQL or SQLite, stored securely on the server, is sufficient. Passwords should be hashed and salted for security.
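Hashed-and-salted credential storage of this kind can be sketched as follows (a Python sketch; in the actual PHP implementation something like password_hash() would play this role, and the cost parameters below are illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    # A fresh random salt per user defeats precomputed-table attacks;
    # scrypt is deliberately memory-hard to slow down brute forcing.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + digest  # store the salt alongside the digest

def verify_password(password: str, stored: bytes) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(candidate, digest)
```

Only the salt+digest blob ever reaches the SQLite/MySQL table; the plaintext password is never stored.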
Search Interface: A simple search interface is sufficient, with basic filtering options.
Protection Against Spam/Bots: To protect against spam and bots, we should implement a submission confirmation page along with protective mechanisms like CAPTCHA and rate limiting. While IP addresses won't be stored to maintain privacy, IP-based throttling can still be used to prevent abuse.
Verification of Crash Reports: Since VeraCrypt is open source, anyone can build their own binary, so we cannot ensure that crash reports come exclusively from official VeraCrypt binaries.
Repository: I will create a specific repository dedicated to this web app since it is not directly linked to the main VeraCrypt software.
Implementation Preferences:
Avoiding Docker: I prefer to avoid using Docker for this project to keep the deployment environment straightforward and minimize complexity.
Database: Given the expected load, SQLite should be sufficient for our needs, but MySQL is already present on the server and can be used if necessary. I prefer SQLite for its simplicity and ease of use.
Framework: I'm inclined towards a minimal implementation with minimal dependencies to reduce the attack surface and make the system easier to review and secure. A micro-framework like Slim for PHP could be a good fit, balancing simplicity with functionality.
External Code: I prefer to keep external code to a minimum, favoring a simple and self-contained implementation. However, if there are lightweight libraries or tools that can enhance specific functionalities without adding unnecessary complexity, I'm open to considering them.
Idea: Using checksums of the submitted data will help detect duplicate and similar crash reports.
Last edit: Mounir IDRASSI 2024-09-03
I recommend using the cloudflare.com client certificate feature, which can block malicious submissions based on the certificate.
To be honest, I'd rather go either with the Symfony MicroKernelTrait or, maybe better, with plain old everything-from-scratch PHP.
It's not that I have a grudge against Slim, but, given the requirements so far, I see little value in using micro-frameworks: routing will be extremely simple, dependency injection too, and configuration management reduces to wrapping access to a few env vars. As for logging, a simple class implementing psr/log can do.
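For context, the "extremely simple" routing alluded to here can be as small as a dispatch table, which is part of why a micro-framework adds little. A language-agnostic sketch in Python (the real project is PHP, and these route names are made up):

```python
from typing import Callable, Dict

# Route table: path -> handler. With only a handful of endpoints this is
# essentially all the "framework" a crash collector needs.
ROUTES: Dict[str, Callable[[], str]] = {
    "/report": lambda: "store crash report",
    "/admin/search": lambda: "render search form",
}

def dispatch(path: str) -> str:
    """Look up the handler for a path, falling back to a 404 response."""
    handler = ROUTES.get(path)
    if handler is None:
        return "404 Not Found"
    return handler()
```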
I'd go for PDO for accessing the DB, trying to keep the SQL as portable as possible.
Also, I prefer the stability/maintenance pledges of the Symfony team to the ones from Slim.
The next questions would be:
- is it worth using Twig for rendering output? It does come with very lightweight dependencies, and I loathe writing my own safe-rendering routines - they always end up as half-assed, full-fledged template engines
- is it worth implementing rate/ip limiting from scratch? that would most likely be the most complex bit of code, and using an existing lib could save quite some development time
- should I wait for the git repo to be set up? I can start working and publish in a github repo of my own if you'd like to only grant me access at a later stage...
Last edit: Gaetano Giunta 2024-09-07
@ggiunta: Thank you for your feedback.
I will go with your choices, as you have more practical experience on this subject than I do.
Regarding protection against spam/bots, using existing libraries that are battle-tested makes more sense than reinventing the wheel and ending up with an inefficient solution.
I have created a Git repository for future development:
https://github.com/veracrypt/VeraCrypt-CrashCollector
You can fork it and create Pull Requests for the various stages of development. Pull Requests are useful for review and discussion about the changes.
There's no need to wait for the entire development to be finished to create a Pull Request. You can proceed step by step, laying the foundations first and then adding features gradually. This will also be helpful for code review.
Forked.
I see a link to the contributing guidelines, but there's no such doc yet. Do you have it available somewhere?
@ggiunta: I have added the missing CONTRIBUTING.md. It is fairly generic, similar to those used in many other projects.
Hello. I'm not dead! Just been a bit busier than expected. Here's the 1st commit - https://github.com/gggeek/VeraCrypt-CrashCollector/tree/gg/devel. A PR is likely within 1-2 weeks max
Hello hello.
I started looking at the implementation of the rate-limiting logic.
Even if we can hope that in real life most users will not hit VC crashes frequently, the goal of the rate limiter is to withstand (and throttle) massive concurrent access.
In order to keep track of the number of requests per time window from a given "user" (which we'd identify by client IP, I presume), a data store which supports high-concurrency updates is necessary. As far as I am aware, sqlite is not quite designed for that - it does not support "select for update", and in its default configuration writes block reads, meaning the whole db is locked! While we could enable sqlite "WAL mode" to make it more concurrency-friendly, it still seems a bad choice for concurrent updates - without even taking into account the need to figure out in detail how php-fpm deals with the connection to the db (is it one connection per process, or is there some sort of connection pooling? how often would checkpoints happen? etc.).
I'd suggest to go for using Redis instead.
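The Redis-based limiter suggested here boils down to an atomic INCR plus a TTL set on the first hit of each window. A sketch of that logic (Python; an in-memory dict stands in for Redis so the example is self-contained, and the limit/window values are made-up numbers):

```python
import time

class FixedWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per key.

    With Redis this would be `INCR key` followed by `EXPIRE key window`
    on the first hit (or a single Lua script), so that concurrent
    PHP-FPM workers share state; the dict below only mimics that
    behaviour within one process, for illustration.
    """

    def __init__(self, limit: int = 10, window: int = 60, clock=time.time):
        self.limit = limit
        self.window = window
        self.clock = clock
        self._counters = {}  # key -> (window_start, count)

    def allow(self, key: str) -> bool:
        now = self.clock()
        start, count = self._counters.get(key, (now, 0))
        if now - start >= self.window:  # window expired: start a new one
            start, count = now, 0
        count += 1
        self._counters[key] = (start, count)
        return count <= self.limit
```

The key would be the (hashed) client IP, so nothing personal needs to survive beyond the TTL.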
As a bonus feature, having Redis available makes it also a breeze to use it as store for session data instead of the filesystem, which imho is also a good idea (again, for reasons of concurrency).
WDYT?
The rate-limiting code is now ready, based on usage of Redis, in a dedicated branch in my own fork. I have written some details in the comments of the currently open PR.
@idrassi side question: do you have already free licenses from Jetbrains for VeraCrypt? If not, I'm happy to request licenses for CLion and PHPStorm / the complete toolbox (the request form is at https://www.jetbrains.com/shop/eform/opensource )
@ggiunta: No, I have never used JetBrains, as I always use VS Code and Visual Studio. I wasn’t aware that they offer free licenses for open-source projects. I can give it a try and see what it offers in terms of productivity. Is filling out the form all that’s needed?
Yes, filling out the form is all that is needed. You might have to specify the names of all the collaborators you are asking licenses for, and mention the fact that there is more than one GitHub repo involved.
Getting the licenses is not a given though - they used to be much more liberal about it, but have tightened the rules over time. Human examination of the submission is involved.
Licenses have to be renewed every year (maybe it is 6 months now?) by basically sending an email, and they might not be renewed, e.g. if the project has not seen recent activity. The nice thing though is that an expired license turns into a perpetual license (for a previous version of the software).
As for productivity: I don't know how CLion compares to VSCode.
For PHP, I find PHPStorm much better than VSCode, but of course there's lot of acquired taste in that. I'd say that it has definitely better code intelligence and more depth in ecosystem integration, such as for dealing with composer files, twig templates, symfony configuration files etc. It needs fewer plugins than VSCode to achieve an equivalent level of automation, and there are fewer plugins available for doing the same thing, making their choice easier.
Hello hello. I messed up a bit with GitHub, and the currently open PR has been updated with the tip of my development branch, i.e. it has all the functionality atm.
Apart from adding tests, this is a list of possible improvements and new features which I'd like to discuss:
1. adding a pepper in the password hasher config - it is recommended by OWASP best practices
2. also: set a min. length for passwords / maybe other rules?
3. also: should we make user emails unique? (it helps with the forgot-password logic)
4. add a 'hash' col to report data, to allow counting dupes: which fields should we include?
5. formatting of the 'call stack' data: do you have any example call stack for me to check?
6. there is extra data which could be gathered and be useful, while still remaining anonymous -> see e.g. the TDF implementation of a report, at https://crashreport.libreoffice.org/stats/crash_details/c1adca44-94eb-441b-99a9-9e8c3a676193
7. should we add a "file a bug here" link in the report confirmation page?
8. adding stats reports with counts per os/version/month, etc. Imho useful. What to group by? See e.g. https://crashreport.libreoffice.org/stats/
9. what about storing tokens in Redis instead of the DB? It would have the advantage of not needing to set up a cron job for token-removal
10. what about adding declare(strict_types=1); to every php file?
11. what about using a "proper" http resp. code 429 when rate-limiting is hit?
12. usage of autoincrementing synthetic PK vs. a natural key (username) for the users table: is it ok to keep the current id?
13. should we add STRICT mode for sqlite tables? see https://www.sqlite.org/stricttables.html
(needs sqlite >= 3.37.0; in ubuntu 22.04 we have 3.37.2. Atm we require 3.35.4)
14. should we add cli commands that run desirable SQLite pragmas?
15. should we add remember-me support for users too lazy to type their password?
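Point 13 can be made conditional on the runtime, since STRICT is just a keyword appended to CREATE TABLE and SQLite reports its own version. A sketch (Python's sqlite3 module here; the guard mirrors the version note above, and the table is a toy example):

```python
import sqlite3

# STRICT tables reject values whose type does not match the column
# declaration; the feature is only available from SQLite 3.37.0 onwards.
STRICT_OK = sqlite3.sqlite_version_info >= (3, 37, 0)

def create_users_table(conn: sqlite3.Connection) -> None:
    # Fall back to an ordinary table on older SQLite versions.
    suffix = " STRICT" if STRICT_OK else ""
    conn.execute(
        "CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT NOT NULL UNIQUE)"
        + suffix
    )

conn = sqlite3.connect(":memory:")
create_users_table(conn)
```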
Thank you for this comprehensive list.
Honestly, this goes beyond what I expected for this feature and it is evolving into a dedicated project for reusable crash reporting. You are doing an amazing job!
Regarding your questions, I cannot provide knowledgeable insights on all of them but I’ll share my opinion on the approach we should follow:
Passwords for dashboard users should be handled according to best security practices (e.g., using pepper and a configurable policy).
Emails should be unique.
Statistics and duplicate handling are important. For duplicates, the call stack is the only thing that matters.
We can add extra fields later. For now, let’s stick with what was specified.
No "remember me" feature.
Regarding an example of a call stack, the URL format would look like this:
https://crashreport.veracrypt.fr/?cpus=2&cksum=28456515&err=11&addr=5815be86be9b&st0=0x5815be80f07b&st1=0x715718242520&st2=0x5815be86be9b&st3=wxEvtHandler_ProcessEventIfMatchesId&st4=wxEvtHandler_SearchDynamicEventTable&st5=wxEvtHandler_TryHereOnly&st6=wxEvtHandler_ProcessEventLocally&st7=wxEvtHandler_ProcessEvent&st8=wxEvtHandler_SafelyProcessEvent&st9=0x7157190d8027&st10=g_signal_emit_valist&st11=g_signal_emit&st12=0x71571793dab0&st13=g_signal_emit_valist&st14=g_signal_emit&st15=0x71571793d884&st16=0x715717bf0bf5&st17=g_signal_emit_valist&st18=g_signal_emit&st19=0x715717a07ffc&st20=g_cclosure_marshal_VOID_BOXEDv&st21=g_signal_emit_valist&st22=g_signal_emit&st23=0x7157179ffacb&st24=0x715717a0783b&st25=0x715717a08443&st26=gtk_event_controller_handle_event&st27=0x715717ba0055&st28=0x715717be5b87&st29=g_closure_invoke&st30=0x715718c78624&st31=g_signal_emit_valist
The call stack is decomposed into elements (e.g., stXX) that are appended to the end of the URL. In the example above, the stack will be extracted after removing the & separators. The call stack is formatted this way in the FatalErrorHandler::GetCallStack method, located in the file src/Main/FatalErrorHandler.cpp.
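Decoding such a submission on the server side amounts to parsing the query string and re-assembling the stXX parameters in order. A sketch (Python here; the parameter naming follows the example URL, while the actual collector does this in PHP):

```python
from urllib.parse import parse_qsl

def extract_call_stack(query: str) -> list[str]:
    """Re-assemble the ordered call stack from stXX query parameters."""
    params = dict(parse_qsl(query))
    frames = []
    i = 0
    # Frames are numbered consecutively from st0; stop at the first gap.
    while f"st{i}" in params:
        frames.append(params[f"st{i}"])
        i += 1
    return frames
```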
"Honestly, this goes beyond what I expected for this feature and it is evolving into a dedicated project for reusable crash reporting."
Indeed, I am also surprised at how much code had to be written to achieve the required functionality :-D On one hand, I might have avoided putting in all the extension points and layers and flexibility; on the other hand, I had more free time than planned, and indeed it would be nice if this ended up being used by other projects too!
Given the call stack example, I will have to refactor the controller handling the upload of the crash info a bit to accommodate it.
Just to be sure we are on the same page: the controller handling uploads atm is only triggered by POST requests, and it answers back with a redirect - the url which should be opened in the browser. If that is correct, I guess that the current crashHandler code will need some tweaks as well...
Ok for the other remarks.
@idrassi I have created a new branch which changes the interaction between VC and the CC:
- GET is used by default instead of POST
- if there are errors in the data, an html page is shown instead of a plaintext one
- the names of the query string args are the same as in the existing VC code
- the php code takes care of decoding the custom format used by VC to send call stack data
See: https://github.com/veracrypt/VeraCrypt-CrashCollector/compare/master...gggeek:gg/fix-interactin-with-vc?expand=1
Happy to send a new PR if you agree that's all correct.
(The original thread where this started can be found at https://sourceforge.net/p/veracrypt/discussion/general/thread/3903ea9e97/.)