| Name | Modified | Size | Downloads / Week |
|---|---|---|---|
| README.md | 2024-01-12 | 2.1 kB | |
| v2.3.9 source code.tar.gz | 2024-01-12 | 1.7 MB | |
| v2.3.9 source code.zip | 2024-01-12 | 1.8 MB | |
| Totals: 3 Items | | 3.5 MB | 0 |
# CursusDB Cluster and Node Bundle Stable v2.3.9

🟢 NO BREAKING CHANGES

🔥 Features & Hot Fixes 🔥
- [x] When multiple connections inserted the same document concurrently, duplicates could occur because of the locking logic within the cluster. This has been corrected. Since this is a distributed system, there is currently only one way to do this: lock on inserts that require uniqueness, so each search-and-insert completes before the next one begins. Take this example:
```go
package main

import (
	"fmt"
	"sync"
	"time"

	cursusdbgo "github.com/cursusdb/cursusdb-go"
)

func main() {
	wg := &sync.WaitGroup{}
	for i := 0; i < 4; i++ { // Create 4 client connections in parallel
		wg.Add(1)
		go func(wgg *sync.WaitGroup) {
			defer wgg.Done()

			cursusdbc := &cursusdbgo.Client{
				TLS:                false,
				ClusterHost:        "0.0.0.0",
				ClusterPort:        7681,
				Username:           "test",
				Password:           "test",
				ClusterReadTimeout: time.Now().Add(time.Second * 60),
			}

			err := cursusdbc.Connect()
			if err != nil {
				fmt.Println(err.Error())
				return
			}

			res, err := cursusdbc.Query(`insert into test({"x!": 33});`) // Just an example
			if err != nil {
				fmt.Println(err.Error())
				return
			}

			fmt.Println(res)
			cursusdbc.Close()
		}(wg)
	}
	wg.Wait()
}
```
Results:

```
{"collection":"test","insert":{"$id":"065efc39-2dcb-4a92-bb32-2a8b7b3a8bed","x":33},"message":"Document inserted successfully.","statusCode":2000}
{"message":"Document already exists.","statusCode":4004}
{"message":"Document already exists.","statusCode":4004}
{"message":"Document already exists.","statusCode":4004}
```
This is the intended behavior: regardless of concurrency, inserts that require uniqueness must be reliable, hence the new lock.
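A client can branch on these outcomes by inspecting the `statusCode` field in the responses shown above. The sketch below is an illustrative helper, not part of the cursusdb-go library; the response shape (`statusCode`, `message`) and the codes 2000 and 4004 are taken from the results above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// response mirrors the fields CursusDB returns for an insert,
// as seen in the results above.
type response struct {
	StatusCode int    `json:"statusCode"`
	Message    string `json:"message"`
}

// classify distinguishes a successful insert (2000) from a
// uniqueness rejection (4004) in a raw response string.
func classify(raw string) (string, error) {
	var r response
	if err := json.Unmarshal([]byte(raw), &r); err != nil {
		return "", err
	}
	switch r.StatusCode {
	case 2000:
		return "inserted", nil
	case 4004:
		return "duplicate", nil
	default:
		return "other", nil
	}
}

func main() {
	ok := `{"message":"Document inserted successfully.","statusCode":2000}`
	dup := `{"message":"Document already exists.","statusCode":4004}`
	for _, raw := range []string{ok, dup} {
		out, _ := classify(raw)
		fmt.Println(out) // "inserted", then "duplicate"
	}
}
```

With the new lock in place, exactly one of the four concurrent goroutines should see "inserted" and the rest "duplicate".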