If you can understand how bash is parsing the backslashes, that is half the battle won. I got hopelessly stuck with command strings a few years ago. Glad the bash people have fixed some things. Remember also to check their behaviour inside nested levels of delimiters. Presumably, backslash handling will change depending on what the inner delimiter is. If everything is predictable, then a general implementation is possible -- which would be great, yay! :-) I don't have a github account, so I'll just...
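As a small illustration of how escape handling depends on the enclosing delimiter (a minimal sketch of standard bash quoting rules, nothing lexer-specific):

```shell
#!/bin/bash
# Inside double quotes, backslash escapes only $, `, ", \ and newline:
printf '%s\n' "\$x"   # backslash consumed, prints: $x
# Inside single quotes, backslash is always a literal character:
printf '%s\n' '\$x'   # prints: \$x
```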
For my old bash 3.1.23, N=2,6,10 failed. The others seem to behave the same. On the old bash, backslash processing may have been working inconsistently with (nested) delimiter processing. The result was haywire, and the test output was nothing I could understand. I tried looking at it a bit just now and couldn't figure out what went wrong with N=2,6,10 on the old bash. So old bash should be regarded as deprecated. Newer bash seems to be working much better, good to know. This scheme...
Hi Zufu, Neil. If anyone wants to update the bash/shell lexer, feel free to do so. I can't realistically work on the lexers I hack anymore... unless I find an enormous chunk of free time. Non-coding things get in the way. C++ used to be small(ish) and easier to code, but these days it's hard to keep abreast of things. I read one of Stroustrup's very long, talky books and thought -- I can only keep up with this if coding C++ is my only pastime. Not gonna happen. The problem with bash is that its...
It's not a huge bug, but it can be fixed after checking and verifying bash and zsh behaviour. I'm busy these days, so anyone out there, please feel free to tackle it. I can confirm that bash handles comment lines in its own function, thus bypassing line continuation processing. I took a look at the bash-5.0 sources: in parse.y [3268], in read_token (command), if the '#' char is found, it performs discard_until ('\n'), a simple function that zaps everything up to the newline. Ray, there are...
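A quick two-line demonstration of that behaviour (a hedged sketch, plain bash with no lexer involvement): because '#' discards everything up to the newline, a trailing backslash inside a comment is inert and does not continue the line.

```shell
#!/bin/bash
echo before # this trailing backslash is just comment text \
echo after
# Both lines execute; the output is:
#   before
#   after
```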
I have updated my files to hg HEAD and tested it, works fine, thanks!
Thanks for taking a look. [[ ]] nesting (and folding) for both SCE_LUA_LITERALSTRING and SCE_LUA_COMMENT can be removed or disabled. The syntax was only present in Lua 5.0, I believe. From Lua 5.1 onwards, [==[ ]==] can be used, so nesting was more like a failed experiment. I don't recall ever reading of anyone complaining that their nested [[ ]] failed when folks moved from Lua 5.0 to Lua 5.1, so the probability of anyone still relying on nested [[ ]] today is pretty minuscule. I think disabling...
My bad, heh. I have non-coding stuff that is taking all my free time, so I am quite behind the curve these days. Here is the explanation for the issue: the incorrect highlighting is due to my attempt at supporting multiple Lua versions, specifically nested --[[ ]] long comments. Since that is long in the past, I guess nested [[ ]] should be removed. Will users care enough that it should be made an option? Perhaps not. If I am working on a patch, I'll ping here.
Here is a recent discussion at the Help-Bash list where Chet Ramey reminded folks that $(< has command semantics, and should not be regarded as an operator: https://lists.gnu.org/archive/html/help-bash/2019-09/msg00002.html The bash refman also supports this view. Look where the $() is described. IMHO anyone who actually needs highlighting inside a $() is doing it wrong. You are not supposed to write tens of lines of code in what you suppose is a syntax structure. It's a command string. IMHO the...
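For reference, the construct under discussion (a minimal sketch using a hypothetical scratch file): in bash, $(< file) is command substitution that expands to the file's contents, roughly equivalent in result to $(cat file).

```shell
#!/bin/bash
f=$(mktemp)            # hypothetical scratch file
printf 'hello' > "$f"
a=$(< "$f")            # command-substitution form, no external cat process
b=$(cat "$f")          # equivalent result via cat
[ "$a" = "$b" ] && echo same   # prints: same
rm -f "$f"
```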
If Neil or other contributors think it's better to be in sync with the highlighting of other editors, please feel free to use this patch. I will leave it to you guys. Semantically, $() made escaping for backtick commands more convenient, so its origins are in string semantics. If other editors turned $() into a form of structural syntax, then they are wrong. It's the bash people who made the abuse of such strings so convenient. If other editors made this look like structural syntax, coders will...
For multiple here docs on the same line, or stacked here docs as some call it, I always got the impression that it sorta works by 'luck' since the lexer remembers the last here doc delimiter. I have only one test sample:

print <<FOO, <<BAR;
some text
FOO
some text
BAR

The end result seemed visually okay to me, but it may do funny things while you are editing it. Being a rare thing, it was not a priority item for me. We can continue discussing improvements to the perl lexer on the mailing list.
I also prefer to avoid too many distracting styling changes, so it's good. Generally, the lexer is more permissive (versus parsing the syntax correctly) and does just enough to give a certain level of styling quality. There are plenty of dusty corner cases that break highlighting. The lexer's approach is to scan backward and forward to disambiguate things; this avoids having to do traditional parsing, especially when highlighting snippets that are not a complete program. Please feel...
Tested, looks fine to me, thanks. Here is a brief guide to qw: https://perlmaven.com/qw-quote-word Here is a snippet to test folding:

# some words
my @name = qw(foo bar zorg);

# some words
my @name = qw(
    foo
    bar
    zorg
);
The patch works fine, thanks. Attached are some test cases for indented here docs. The lexer will be more permissive than perl in the kinds of indented here docs it allows; not a big deal if the here doc block is edited using the same indentation settings by the coder(s). The coder should be able to fix things if perl barfs on it, since the error message is clear.
Here are my notes for perl updates to 5.30. The main breakage is indented here-docs in perl 5.26. There were some minor glitches in 5.24 which I will deal with in another ticket in the future. This ticket will be for indented here-docs. I will add more test cases while I test the patch. Once this is done, we're in good shape for perl 5.30 code.
Thank you. I had done updates up to 5.24 but was too busy with other things to focus here. I will look at it this week, learn about the 5.26 stuff, and test it. Then when Neil is back from his holiday, he can push it into the repo. I just quickly scanned perl5260delta.pod and it looks like this indented here-doc is the only thing that needs to be added. There does not appear to be anything in perl5280delta.pod that needs lexer updating. But if there are other items like new functions etc., let me know....
In the past week or so, I posted 2 queries on help-bash. Chet Ramey replied on another help-bash thread today, but my queries have not been answered by any actual bash dev. The devs had ample time to do so. I will wash my hands of help-bash entirely. <shrug></shrug> The case of backticks in double quoted strings can be understood in the simplest way as: the backticks part has a strong semantic relation to the outer double quoted string, because it takes on double quote style escapes. But this semantic...
Found the issue in the bash sources. Now asking help-bash why it is coded like that. Looking at the source code of bash-5.0. First, for many strings with delimiter pairs, there is a pair of push_delimiter and pop_delimiter calls bracketing a parse_matched_pair call. parse.y[4971]:

/* Parse a matched pair of quote characters. */
if MBTEST(shellquote (character))
  {
    push_delimiter (dstack, character);
    ttok = parse_matched_pair (character, character, character, &ttoklen,
                               (character == '`') ? P_COMMAND...
I've posted a query on help-bash, and there's nothing useful after over 24 hours except for advice on using $(command) instead of backticks for such scenarios LOL. I can always study the source code of course, but I think it's useful to prod them about intent w.r.t. this thing and how bash coders should expect escapes to behave when nesting strings. Expectations and intent. That might help make the ref man a little bit more useful in the future. Let's see how it goes in the next few weeks, I'll post...
All behaviour being studied can be explained by the documentation except for escaping double quotes in an inner backticks string inside a double quoted string. I'm sufficiently disturbed by this that instead of coding a fix for the lexer, I will ask the bash maintainers about it first. In the meantime, all my notes are in the attached text file.
The backtick backslash thing can be decoded if we assume a two-step process. Colomban was correct, part of the solution was in the command substitution section. Needed to carefully study escape behaviour and assume the subshell does something as well. Backticks process escapes differently versus normal escape processing, but they work one after the other in tandem:

#                                      BT    SS
echo `foo=\1 ; echo $foo`    # OKAY 1  \1    lit1
echo `foo=\\1 ; echo $foo`   # OKAY 1  \1    lit1
echo `foo=\\\1 ; echo $foo`  # OKAY \1...
While doing the fix, I found something in my old tests that may be useful. In one of the old SF tickets, someone's code was like this:

# SF #1515556
ONEPKG="^\"`echo -n "$ONELINE" | cut -f 2 -d '"'`\""

So putting the delimiter char in single quotes seems to work too. The following were tested on the command line and worked:

cat test_bash.txt | cut -d '"' -f 2
echo `cat test_bash.txt | cut -d '"' -f 2`
echo "`cat test_bash.txt | cut -d '"' -f 2`"
Awesome, they have done some heavy lifting. Looks VERY useful, a great pathfinder to use. Back in the day, I guess it was praiseworthy, elegant parsing. For now, I will fix the cases where a footpath can be seen, and avoid peering into those dusty and dark corners where monsters might be waiting to pounce on the unsuspecting. :-) When the Shellshock stuff came out, I rolled my eyes a lot.
Very interesting, I'll add it to my notes. I have tried the bash ref man -- there is stuff on escaping chars in there, but nothing precise enough to help with this. No difference without echo. Behaviour with an escaped single quote is the same. Adding more backslashes, hmmm, 4 backslashes to get 1 in the output? But why?

echo `foo=\" ; echo $foo`       # OKAY "
echo `foo=\\" ; echo $foo`      # OKAY "
echo `foo=\\\" ; echo $foo`     # FAIL
echo `foo=\\\\" ; echo $foo`    # FAIL
echo `foo=\\\\\" ; echo $foo`   #...
I think this problem may be sufficiently common that I will try to write a VERY limited patch this weekend. Let me just cheat and do escape checking for 2 nesting levels for ONE scenario only:

# I'll try to implement working escapes for this ONE case only:
echo "foo `date \" escape` bar"    # FAIL, " passed to cmd
echo "foo `date \\" escape` bar"   # OKAY

# I totally don't understand the last two. Can anyone explain
# how bash is parsing the line of code?
echo "foo $(date \" escape) bar"   # OKAY
echo "foo...
Okay, I think I understand what you are trying to say now: The script line with the \\" highlights wrongly but is correct code. By deleting one backslash it highlights correctly, but obviously the shell script won't work. So yeah, that would be a bug because I have not expended any time on analyzing the behaviour of escaped characters in nested strings. Patches are welcome. As a frequent contributor to the shell script lexer, I wish to say that I don't have the time to do anything Scintilla in the...
I just noticed that, going by my four test samples, your example should not execute in bash. You will need to check if it is valid bash code. Better, give us the simplest test case possible that illustrates the highlighting issue. So simplifying your sample:

echo "foo \"blub\" `grep fo | cut -d \\" -f 4`" >/some_file.sh

First, the two \" around blub would be escaped by the double-quoted string handler. (If there is one level of strings, it is simple to do.) The problem we are interested in is the escaped...
Yeah, this was one of the things about escapes in nested strings that I skipped. This is a bug and it will probably be way down on my TODO list. Don't hold your breath, ha ha. Patches are welcome. Here are 4 samples that I tried on the cygwin 32-bit bash 4.4.12 command line:

echo "foo `echo foo \" bar` bar"
echo "foo `echo foo \\" bar` bar"
echo `echo foo \" bar`
echo `echo foo \\" bar`

Test 1: (fails)
Test 2: foo foo " bar bar
Test 3: foo " bar
Test 4: foo " bar

So maybe escape interpretation...
If you are thinking along the lines of having separate string styles for the inner nested string, or code in a command string highlighted like code, then the bash lexer has currently no support for that. I know code inside strings is a common coding idiom in bash, but it's not on my TODO list. Someone else would have to contribute that. (I like to keep string semantics as string semantics.)
Hi Wolffpack, I did a lot of the shell highlighting fixes in this area and I'm happy to see bug reports from motivated users or contributors. Backticks inside double-quoted strings should work fine, like this: echo "foo `date` bar" When testing these things, it is important to test snippets on an actual shell window to make sure they are valid bash scripts. I have checked all permutations of nested string-like thingies and implemented what is needed. All sane user scripts should highlight various...
Just tried this latest patch and it works fine on my MSYS2 compile. Now I will go learn the ILexer4 stuff in the Scintilla docs, heh. Attached is a copy of all my bash test files.
I will mostly leave this to Neil, since I am really behind on ILexer4 and stuff like that. Looks to me like behaviour will be mostly identical; I will have to read up on what some of those things do first. Anyway, I will try it out on my test cases in 1-2 days and will report any issues here.
I'll leave the improvement patch in both of your good hands. I am busy with other stuff and will be lurking for a few months yet, so have fun guys. :-)
By 'keywords' I mean 'function', 'return', 'goto', etc. These cannot be mixed with normal identifiers. 'table' is a normal identifier. Hence, table.goto is illegal, while table["goto"] is legal. Only the first keyword style has special behaviour restrictions; we reserve this for keywords. All other identifiers can mix, I think. This is what the patch implements. See the test cases. Try it out and see if you want to tweak it further.
Overthinking C++ strings. I'd get my knuckles rapped by Stroustrup (I wish). Reallocs one way or the other. Deleted them. I'd go with avoiding warnings from cppcheck. No warnings is less stressful. :-) I've pinged Paul K to see if he wants to weigh in on anything in this ticket, since he seems to be a party interested in improving the Lua lexer for his users.
Here is a patch. It does matching for identifier chains with dots and colons. It also does partial matching and breaks off in the case of keywords. Details inside. LexLua and test file attached. Other non-related behaviour checked, unchanged. In the original LexLua line 324, there is a block of code that sets word styles after the main lexer loop. I deleted it, nothing seems broken when typing out keywords at the end of the file. Maybe not needed anymore? It looks like the commit for that block of...
Nothing else to change from me. No problem with forward scanning of whitespace, it will end when GetRelative() returns zeros. Guess I am feeling a bit paranoid. Instead I will shift to #1952 and try to get that done this week.
Works for me. ForwardBytes() is only present in LexLua and LexPerl. I will check the code and also the whitespace scanning loops and report back this week, perhaps in a separate update ticket.
Oh OK, I assume you will change ForwardBytes? I will fix the "skip over spaces/tabs" while loops in LexLua, since they are an infinite loop risk too.
So I think whatever the interaction between LexLua and its runner, the "skip over spaces/tabs" while loops are dangerous with the default 2nd argument of SafeGetCharAt(). My bad. I will have to change that. Probably get the end of line position and limit scanning to that range only.
Okay, in the label scanner (LexLua.cpp:134-172), commenting out the "skip over spaces/tabs" while loops eliminates the hang. So I am cautiously guessing that it may be an issue with the lexer runner. I will spend some time on this later today etc, but I am not quite fluent with stuff beyond the lexer(s). Instead of GetRelative(), perhaps I can/should use SafeGetCharAt() with a non-space 2nd argument.
This bug appears to be present pre-version 4 too. I can reproduce the behaviour. If label has been seen/lexed then there is no hang. There is no hang if I change the filename to use another lexer. I will trace some values in the LexLua label code and then report back.
To anyone interested in implementing this: Some other systems (this SF ticket's markdown system, for example) appear to have taken the "mixing strings and statement blocks" approach, that is, command strings are highlighted like normal code. I guess this is akin to code and data mixing in HTML/JavaScript. So in the last post, the two pieces of the one double-quoted string entity are separated by normally highlighted code (probably). Sometimes it's rather unclear where a string starts and where it...
In the following example, we can mix command strings and double-quoted strings, and comments are recognized within the command strings:

#!/bin/bash
a=$(
# comment level 1
echo 'level1'
b="double_quote_start $(
# comment level 2
echo 'level2'
) double_quote_end"
echo $b
)
echo $a
echo "end of test"

This illustrates the problem with the bash method of parsing strings. What rules does the double-quoted string assigned to b follow? It seems that bash switches string rules...
Here is a tested example of nested $() and backtick strings:

#!/bin/bash
a=$(
# comment level 1
echo 'level1'
echo $(
# comment level 2
echo 'level2'
)
echo `
# comment level 2b
echo 'level2b'
echo `
# comment level 3
echo 'level3'
echo $(
# comment level 4
echo 'level4'
)
`
`
)
echo $a
echo "end of test"

The comments are all recognized as valid comments by bash. As such, inside such comment lines, string delimiter characters are not semantically...
I've looked over my notes and test cases and tried out some "what else" cases that bugged me. This may (or may not) help anyone who wishes to attack this issue. First, the bash docs mentioned $(COMMAND) when I worked on this feature area. It may have changed since then. I saw $(COMMAND) and expected coders to write stuff of limited complexity inside the $(). But now it appears bash just adjusts its parsing state. Things can be very complex inside the $(); it basically allows arbitrary code. Did that...
Thank you for the bug report. This is due to the lexer historically trying to analyze command strings as a kind of string, and not something that needed full-blown nested parsing. It detects the single quote as the start of a string, because in bash there are cases where different types of strings can be nested and their escape behaviour changes accordingly. Unfortunately, now it appears that I need to scan for # comments also in command strings, and god knows what else. Not easy to do without doing...
Thanks, it's worth a shot. The more noise the better. Let's see what happens. At one point a few years ago, I was casually watching the Padre people. But as with these things, sustaining projects is difficult; often the effort cannot be sustained and the thing peters out. I have been neglecting to check the last couple of Perl 5 stable releases (for updating Scintilla) myself... TODO list starting to look ugly.
As far as I can tell, nobody I know is working on a Perl 6 lexer. (I did a lot of the more recent Perl 5 updates.) I don't have any plans to write a Perl 6 lexer. It may be easier for a current Perl 6 user to contribute a lexer (okay, that's easier said than done, but anyway) rather than a non-user. It's kinda difficult for a non-user to code the lexer (taking many hours doing that) and not be using it. So, realistically we will probably need a contributor that's either (1) a Perl 6 user who can...
I did the highlighting for string interpolation; it handles most of the $VAR parts correctly. I have not considered all the possible suffix types. Most samples I've seen involve braces {} etc., and those can be very complex. So I have so far treated the $VAR as the anchor point of the expression. Personally I prefer minimal extra styles for interpolation vars; generally I like my strings to look like strings. I have no objection to having string interpolation highlighting...
Gee man, you don't have to school me on pcre, I'm not a newbie. Neil has positioned Scintilla 4.0 to use C++ 14 and above. So I checked, and it's C++ 11 that introduced regex. There has been talk of C++ regex that I haven't followed closely on the mailing list, so it would not be very intelligent of me to listen to certain advice. I will be less talky in the near future due to my trying to get things coded so I may not respond to further postings on this ticket. Thank you for your interest in these...
Thank you, it looks very interesting. I'm not sure what I can do with that for now. I guess C++14 or something has built-in regex capabilities; I don't know if Scintilla lexers have started using such regexes, and my C++14 familiarity is zero at the moment. But I hope to get up to speed on C++14 and everything else, and start working on my TODO list.
I got all the postings via e-mail, no problem. Not everybody is going to use cat with HEREDOCs. Feel free to offer a patch. Currently I am not able to work on this immediately, but it's queued up.
Look at the following samples. Both are double-quoted strings.

echo "foo $(cat << EOF
Try "th"is"
EOF
) boo"

echo "<<EOF"

The HEREDOC is only recognized in the first one because $() is processed inside the "string". Bash switches string behaviour to backticks within the $() and the HEREDOC is processed. In the second, the << is just literal characters. Bash allows this kind of nesting, and shell coders will tend to use (cough, abuse) $ constructs in "strings". One can even abuse this kind of string to...
This is a bug, but fixing it is currently not my priority. The bash lexer balances parentheses for $(), and handles single quote and double quote escaping of parentheses correctly, but I missed (or ignored, ha ha) HEREDOCs. I can't fix this anytime soon because scanning these things is an ugly mess. One basically has to parse the whole thing; lexing breaks in all sorts of ways. No thanks to the uber geniuses who inflicted such syntax upon us. Historically Scintilla has implemented styling of $() as...
No problem, Neil will close it later.
Hi AMDanischewski, I wrote the patch to fix the issue between 3.7.4 and 3.7.5. It was due to scanning for several zsh syntax features when the segment has parentheses but no whitespace inside. It should not impact array highlighting now. The code snippet looks fine on my (old) build. The aim is to keep the thing usable for zsh as long as we can. I hope you can get your 3.7.5 build working for validation purposes; while I have built SciTE on Linux in the past, I have too little experience with it to...
See: https://sourceforge.net/p/scintilla/code/ci/default/tree/lexers/LexLua.cxx Identifier handling is in lines 180-217. It has been unchanged for a long time, I think; I only worked around it when adding stuff like labels. Currently, it lexes greedily, including the '.' char, then attempts to match the whole thing against a number of keyword lists. I was thinking, if "a.b" matches nothing, we can try "a" as well. Of course, all this won't be as good as truly smart editors, but it may be better than...
I will look into it, timeframe by end of next week. If anyone else wants to patch this, please go ahead. This sounds reasonable, but there are one or two details I want to consider. For example: if keyword matching did not find anything for "a:b", then we should try to match "a". Currently the lexer does not check "a" after an unmatched "a.b". Should we test for the "a"? Is this extra behaviour very desirable or a minor win?
Here is an update for bash lexing to fix this issue. Attached are LexBash.cxx and test samples. A few more simple rules were added so that single-element arrays are highlighted correctly. zsh globs which do not have the extended # syntax will now be highlighted using normal highlighting, a minor behaviour change that perhaps can be called an improvement. Maybe someday I will actually install zsh and do some proper tests... Code from hg HEAD, tested on gcc version 5.1.0 (tdm-1) on Win 7 Pro.
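To illustrate the case the new rules target (a hedged sketch; the array names are made up): a single-element array, which previously could be mis-highlighted, alongside an array mixing quoted and unquoted elements.

```shell
#!/bin/bash
single=(one)          # single-element array
multi=("a" 'b' c)     # mixed quoting inside the parentheses
echo "${#single[@]}"  # prints: 1
echo "${multi[1]}"    # prints: b
```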
Thank you for the example, it is appreciated. My usual bash contact is with tweaking startup scripts. I have found the array part of bash syntax in the reference manual now. After reviewing my stuff, it turned out that there was a valid reason to put the globbing code in: it is meant to enable zsh scripts to be highlighted without serious breakage. zsh added some globbing stuff with various kinds of syntax; they used parentheses and the # character, which caused breakage in bash. I did my best to...
It seems like a holdover from the perl lexer. Perhaps I left it there for pathspecs. Having checked the bash docs, taking it out completely should not break anything. In the meantime, people writing actual code should not be impacted by this bug.
I agree, single quotes and double quotes should be detected in a list. I will make a patch within this week. Without whitespace between the parentheses, the lexer intentionally highlights the segment as part of a path spec. It does not check enough syntax yet to do absolutely precise highlighting. However, I would be very interested in an actual code sample if you have one, rather than artificial snippets. I haven't actually seen anyone use a command list without whitespace in there, ever.
Neil, I think this ticket should be closed as invalid.
On the complaining bit, I was only referring to myself. Sorry if you thought I was insinuating something. It's to say that if you want an editor customized to very particular needs, open source or free software gives you all the tools to do it. It's not even a valid cosmetic problem. I would mark this ticket as invalid and close it. vim behaves the same as Scintilla. From the Notepad++ forum to Menno and me here, everyone with experience says it's fine, it's a string block. It's a special case that...
It's a feature that misleads normal users and has endless scope. Look at your example: you have Apache conf there too. How soon before you ask for highlighting of other languages because they break with shell highlighting? And then more? Would you pay for the feature? The HEREDOC body code would need to identify shell code versus other code. Etc. etc., i.e. an endless time sink, I'm not kidding. If not having special-case behaviour that caters to a very small demographic is awful, then I am happy...
There is no hierarchy of styles. There is no nesting of styles. There is only one body style in HEREDOC. The whole thing from after 'SES' to the closing SES is the string body. So you say: "I would wish this coloring was removed". What do you expect in its place if the HEREDOC body style is removed? Do you mean the normal bash syntax highlighting? The latest vim also shows a single style for the entire HEREDOC delimited by SES. Where in the world did someone teach you to wrap things in multiple levels of...
The HEREDOC body has one style; semantically it's a multiline string style. If dark red on dark blue is the default colour scheme picked by Notepad++, then it's a very poor choice with terrible colour contrast, since configure files have plenty of such sections. I didn't pick Notepad++ colour schemes. Adjust their colour scheme. If the colour scheme is their fault, offer a better one to them. "I am not the only one in this opinion"? Benqzq (Ben again? Not you?) in that ticket provided an example...
I'm to blame for much of the bash highlighting stuff. Describe the problem clearly. "Serious styling issue" does not help any Scintilla developers here. If you have a test case, describe the issue and provide something that we can cut and paste. Looking at your samples, do you mean you want the contents of the HEREDOC highlighted like an Apache configuration file? Then autoconf people will start asking for shell highlighting in there. Then others will want other stuff automagically highlighted in...
I am a contributor, well I have updated the properties file in the past, but have...
I'm to blame for a lot of the Bash/shell highlighting updates. Anyway, I would welcome...
Here's the problem: LexMarkdown.cxx[384]: else if (sc.ch == '`' && sc.chNext != '...
Fix for perl lexer module syntax glitch and repetition operator
Updated perl subroutine prototype highlighting support
Here is a patch. I have gone the lazy way and kept <<123 and <<$var for [t]csh shells...
You're right, serves me right trying to fix it in a hurry. Now the following ##a...
In testing of Cygwin bash, it appears that << in HEREDOC form is illegal in arithmetic...
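To make the distinction concrete (a minimal sketch; this is just bash arithmetic expansion): inside $(( )), << is the left-shift operator, not a heredoc introducer.

```shell
#!/bin/bash
echo $(( 4 << 1 ))   # arithmetic left shift, prints: 8
```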
Here is a revised LexBash.cxx for the ##a and ##^A. My apologies for messing up these...
Update to LexBash to support zsh #-related highlighting glitches
Some Perl 5.22 updates for LexPerl
Fix Perl subroutine missing proto symbol, also removed dynamic array
Update Lua lexer and properties file to Lua 5.3.x
I will take a stab at fixing this during the Dec holidays. Simple whitelisting of...
Fix for HEREDOCs that are syntax errors, removed dynamic arrays
Speaking as someone to blame for a lot of the bash lexer code... The Bash lexer does...
To say it unambiguously, I applied them on my own machine and they tested okay.....
I have applied and tested the patches. Looks great to me. Thanks!
Works for me, WSciTE 3.4.3. Support for the // operator was added in 2008. What version...
For most users, current shell highlighting is "good enough". It's easy for upstream...
My bad, it is already an operator, heh heh. I see it in my test cases but did limited...
Missing handling for <<< operator. Once <<< is handled as an operator, brace matching...
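For context (a minimal sketch of the construct named in this ticket): <<< is bash's here-string redirection, which feeds the word on its right to the command's standard input.

```shell
#!/bin/bash
wc -w <<< "one two three"   # three words arrive on stdin, prints: 3
```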
Tested the Sc430.exe Sc1 executable from SF on WinXP SP3 adding settings given by...