Binary options api - comingo.gov.vn

Does anyone know a good BTC/USD binary options broker, ideally with API access? /r/Bitcoin

Does anyone know a good BTC/USD binary options broker, ideally with API access? /r/Bitcoin submitted by BitcoinAllBot to BitcoinAll [link] [comments]

Good place to trade bitcoin binary options through API interface?

I did some searching on Google and this subreddit and didn't find much that looked up-to-date or trustworthy...
Just looking for a reputable trading site that offers binary options and has an API for access. And, yes, I know that binary trading is essentially pure speculation. :)
submitted by sigma_noise to BitcoinMarkets [link] [comments]

I'm looking for a Python API to a binary options trading platform.

I have an algorithm for binary options trading, but I don't feel like manually working a GUI to do my trades.
Could someone point me to a resource for executing my trades via Python?
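For what it's worth, the brokers that expose an API at all almost always do it as a REST interface over HTTPS, so the glue between a trading algorithm and order execution tends to be a thin HTTP client. Below is a minimal sketch of that shape in Python using the requests library - the base URL, endpoint path, field names and auth scheme are all made up for illustration, so substitute whatever your broker's API documentation actually specifies:
```py
import requests

# Hypothetical broker endpoint and credentials - replace with the real values
# from your broker's API documentation.
BASE_URL = "https://api.example-broker.com/v1"
API_KEY = "your-api-key"

def place_binary_option(direction: str, amount: float, expiry_minutes: int) -> dict:
    """Submit a CALL/PUT binary option on BTC/USD via a generic REST endpoint."""
    resp = requests.post(
        f"{BASE_URL}/orders",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "instrument": "BTC/USD",
            "direction": direction,        # "CALL" or "PUT"
            "amount": amount,              # stake in USD
            "expiry_minutes": expiry_minutes,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example: stake $10 on BTC/USD being higher in 5 minutes.
# print(place_binary_option("CALL", 10.0, 5))
```
The algorithm itself then only needs to call place_binary_option() whenever it produces a signal; the hard part is finding a broker whose API is actually trustworthy.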
submitted by metaperl to Python [link] [comments]

What would be a good way to have a "plugin" or "extension" system? (more info in body)

I'm currently making something similar to nodejs, called Novel((.)js) (contributions welcome), and one of my ideas is to let packages link against some Rust code to get expanded functionality - for example, calling into an HTTP library, since I haven't implemented that at all yet.

Sure, they could just push upstream, but that could be a lengthy process and not very user friendly. So my idea is to have a config.toml file that specifies which plugins to use and where to fetch them from.

My current problem is that I don't know where to start at all. I don't know whether shared objects are the best way (I heard they are unreliable when passing complex types on macOS), and other than shared objects I'm honestly not sure what other options there are. Any help on this question, or on the project in general, is appreciated. (Even if it's just a pull request on the README to expand the plans.)

If you need any more info, comment and I'll reply.
submitted by accusitive to rust [link] [comments]

Bluehole - Let's talk Wellbia/XIGNCODE3 user privacy risks, for the sake of transparency

For those who don't know...
XIGNCODE3 is a kernel-level (ring0) privileged process under xhunter1.sys, owned by the Korean company Wellbia (www.wellbia.com). Contrary to what people say, Wellbia isn't owned by or affiliated with Tencent; however, XIGNCODE3 is custom-built under contract for each individual game - mainly games operating in the APAC region, many of them owned by Tencent.
XIGNCODE3 is licensed out to companies as a product tailored to the specific characteristics of each game. The process runs at the highest privilege level of the OS from boot and is infamous for being essentially a rootkit - at that level it is the component most exposed to abuse should Wellbia or any of the third-party companies become the target of an attack.
It has been heavily dissected and reverse engineered by the hacking community and found to be highly intrusive (and it is nowadays still easily bypassable by a skilled and motivated modder who builds a custom Windows framework).
While most of this is true of any standard anti-cheat, users should be aware that XIGNCODE3 is able to scan the entire user memory cache and DLL calls, including physical-state APIs such as GetAsyncKeyState, where it reads the physical state of hardware peripherals - essentially becoming a hardware keylogger. The long history of reverse engineering of this software has shown that Wellbia collects a large amount of user data for internal processing in order to build whitelists of processes and strings, analyzed by evaluating PE binaries. Having full access to your OS, it is also known to scan user file directories and to collect and store the paths of files modified within the last 48 hours in order to detect possible sources of bypassing.
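(For readers unfamiliar with the call being referenced: GetAsyncKeyState is an ordinary, documented Win32 API that any user-mode process can use to poll whether a given key is physically held down at that instant, which is why a kernel-privileged component reading it wholesale raises keylogging concerns. A tiny Python illustration of what the call itself reports, using only the standard library - this only demonstrates the API's semantics, not anything specific about how XIGNCODE3 consumes it:)
```py
import ctypes

user32 = ctypes.windll.user32  # user32.dll exposes GetAsyncKeyState (Windows only)

VK_SPACE = 0x20  # virtual-key code for the space bar

# GetAsyncKeyState returns a SHORT whose most significant bit (0x8000) is set
# while the key is physically held down at the time of the call.
if user32.GetAsyncKeyState(VK_SPACE) & 0x8000:
    print("space bar is currently held down")
else:
    print("space bar is up")
```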
All this data is ultimately sent to Wellbia's host servers - including via API calls to Korean servers - in order to run services such as whitelists, improve algorithm accuracy and run comparative statistics and analysis based on binaries, strings and common flags.
This is a risk with any such service, including BattlEye, EasyAntiCheat, etc., but what worries me about Wellbia, and thus Bluehole, comes down to a couple of points:
(not to mention you can literally just stop the service from installing, which by itself is already a hilarious facepalm situation, and nowhere does the TSL call for an API of the service)
  1. Starting off, Wellbia is a rather small development company with only one product on the market, sold to rather small companies, the majority of them held by the Chinese government or based in countries where data handling, human rights and user privacy are heavily disregarded. This makes my tinfoil hat suspect that the studio's network security isn't as fortified as that of a Sony (which has abused rootkits before), on budget alone. Their website is absolutely atrocious and amateurish - and for an international company dealing with international stakeholders and clients, the amount of poor English, errors and ambiguous information in their presentation website is impressive; there are instances where the product name is not even written correctly in their own EULA. If a company cannot invest in basic PR and presentation, I'm left with the bitter taste that their network security isn't any better. They can handle user binaries, but network security is a completely different kind of work. The fact that hackers are easily able to heartbeat their API network servers only confirms this for me.
  2. This is the most fun one. Wellbia's website and terms and conditions explicitly say that they're not to be held accountable should anything happen - terms that you agree to, and are legally bound by, by default when you accept Bluehole's terms and conditions: "Limitations of Company Responsibility
  1. IGNCODE3 is a software provided for free to users. Users judge and determine to use services served by software developers and providers, and therefore the company does not have responsibility for results and damages which may have occurred from XIGNCODE3 installation and use.
(The fact that in point 1 they can't even be bothered to write the name of their own product correctly shows how little they care about things in general - have a look at this whole joke of a ToS, which I could probably put more effort into writing myself: https://www.wellbia.com/?module=Html&action=SiteComp&sSubNo=5 - so I'm sorry if I don't trust where my data goes.)
3) It bothers me that Bluehole adopted this mid-release, well after the product went on sale. When I initially bought the product, nowhere was it written that the user's operating system data would be collected by a third-party company on servers located in APAC (and I'm one of those people who actually reads terms and conditions) - and the current ToS still only touches on this topic slightly and ambiguously: it does not say which data gets collected, nor disclose who holds it or where - "third party" could be literally anyone. That is a major disrespect towards your consumers. I'm pissed off because when I purchased the product in the very early stages of the game I did not agree to any kernel-level data collection held abroad without disclosure of what data is actually being collected - otherwise it would have been a hard no on the purchase. Changing the rules of the game and the terms and conditions in the middle of the product's release leaves me with two options: use it on your terms, or stop using a product I've already paid for, which now has no use. Both the in-game changes and these third-party additions are so different from my initial purchase that it feels like buying a shower which, a year later, is so heavily modified that it decides to be a toilet.
I would really like you, Bluehole, to show me the terms and conditions from when the game was initially released, and to offer me a refund, since you decided to change the product and the terms and conditions midway - changes I don't agree with, yet I'm left empty-handed with no choice but to abandon the product. That turns this purchase into a service I used for X months rather than a good.
I really wish this topic had more visibility, as I know the majority of users are completely in the dark about this whole thing, and I wish Valve and new game companies would make a real effort, when curating their games in the future, to enforce disclosures about data transparency and limits on how far a product can drift from what was originally purchased - I literally bought a third-person survival shooter and ended up with a rootkit-laden Chinese FPS.
Sincerely, a pissed-off customer who, unlike the majority, is concerned about data privacy - and I hope you are one day held accountable for changing sensitive contract terms such as user privacy mid-release.
-----
EDIT:
To completely remove it from your system, should you wish:

Locate the file Xhunter1.sys. This file is located in this directory: C:\Windows\xhunter1.sys

Remove the registry entry (run regedit from the command prompt). The entry is located here: HKEY_LOCAL_MACHINE > SYSTEM > ControlSet001 > Services > xhunter


For more information about XIGNCODE3 and previous successful abuses which show the malignant potential of the rootkit (kudos to Psychotropos):

- https://x86.re/blog/xigncode3-xhunter1.sys-lpe/
- https://github.com/Psychotropos/xhunter1_privesc
submitted by cosmonauts5512 to PUBATTLEGROUNDS [link] [comments]

Allow me to explain how traditional game "patching", as done by developers on consoles and even PC, is not always required for games to run better on Stadia over time... Stadia engineers can do it on their own to keep improving the visual quality of individual library titles.

I've been mulling over how to write this post without it getting too wordy and turning people away from the topic... but I feel it's important for people to consider when deciding whether to invest in game purchases on Stadia. Even though a years-old game is ported to Stadia by a 3rd-party publisher, it is not abandoned once delivered: where no game engine code changes are required, the Stadia team can take over tweaking the performance of the game as the Linux OS kernel / Vulkan API / eventually the hardware undergo improvements over time.
I've seen heated comments/reactions in these parts when people start noticing older games suddenly looking or performing better... even though there is no sign of a game patch from the developer or any announcement that such a thing has happened. (FFXV.) I'm here to explain how this is totally possible.
(Disclaimer: I've been a gaming platform tester for 13 years, on a platform based on the Gentoo Linux kernel. This year I have branched directly into OS kernel / package testing itself.)
A software package / game is made up of more than just game code and pretty graphics. Another fairly big piece of the puzzle is configuration files, especially in the Linux world. Another thing about Linux is that it never sits still. It's open source and ever growing, improving through constant iteration by engineers around the world. This includes the Vulkan API itself. Stadia's platform and Vulkan API have likely undergone dozens if not hundreds of iterations in the past year alone. It is CONSTANTLY improving, even if ever so slightly.
For comparison, a gaming console is a completely sealed environment. Not only does the hardware never change, but the OS and base platform have very little wiggle room for improvement. Most significant improvements happen within the first few years of a new console's life. But often the gains from that never spill over into the games themselves... but rather into the platform's UI and menus, such as adding new features outside of the game. For anything to change about a game at all, a patch MUST be delivered to the console. There is no other option, because the config files of individual games can't be touched in any other way.
On PC you often have access to these config files (at the developer's discretion of what they choose to expose, of course). Many people know how you can dig into these settings and adjust number values and flip on/off flags to affect your game. But these configuration files have default values set by the developers that are expected to never really be touched by the players... so even when the developers do want to change something for the benefit of everyone, they need to issue a game patch.
Now on a cloud platform such as Stadia, when a game is delivered by a developer to the platform, the game engine code (binaries) of course cannot be altered by anyone but the game developer themselves, as usual... so if there are bugs in the code, or engine code improvements that can be made, the developer must deploy a game patch, as we have seen and as people would expect. However, the configuration files which define how the game performs on the platform's hardware are completely exposed... and this is what the Stadia team most likely has FULL control over. So if the Vulkan API gets some improvements or code optimizations and they can squeeze a little more performance out of the game, the Stadia team can go into these config files and adjust things accordingly.
Not only configurations but also the graphical assets themselves (media) can be swapped out for higher-res assets. It's also very possible that the publishers/devs provide Stadia with multiple versions of their media at different quality levels - higher-res textures that can be swapped in once the platform is optimized enough to handle them, etc.
Why would the Stadia team take on the management of all the games in such a way? Because it's absolutely in their best interest to do so. This is also a big favor towards the game publisher... Stadia does work to improve the game, ultimately generating better reception and sales of these games, producing revenue for both Stadia and the publisher.
Cloud platforms are a new animal in the gaming world. How the games are maintained over time can be done very differently than what we are used to with console and PC.
So naturally this turned into a wall of text but I couldn't do it any other way... some things simply need to be explained as clearly as possible to get across.
tl;dr: As the Stadia platform / Vulkan API improve constantly over time, Stadia engineers can tweak the configurations of ANY game to make it look/run better without the developers needing to be involved and patch the games.
submitted by Z3M0G to Stadia [link] [comments]

Why I created a package and project manager for C (in Rust ofc !)

A few months ago, I was wondering why we hoard Makefiles and why it is so painful to use an external library in a C project.
So I had this idea : Creating a project manager & build tool for the C programming language.
I started to write a piece of code in C and it was not functioning properly (because I'm one of the worst C developers in this world), but I kept going until the thing would no longer even run.
At the same time, I was learning Rust; so I decided to try rewriting the whole project in that language.
After a few weeks of rewriting, I had a decent product. On 08/10/2020, Wanager 1.0 was released. It had only a few features: project creation and reinitialization, project build and run, and header creation.
At that point, someone called Lockeer told me that it would be cool if it could manage libraries.
So I wrote a simple system to install libraries hosted on my VPS, with a submission system based on mailing. It worked, but it was limited by the VPS bandwidth and the complexity of submitting by email.
Then SuperFola popped up:
https://user-images.githubusercontent.com/61330081/96449113-aa418a80-1214-11eb-97e8-32c7afd86ff8.png
At first, I decided, as he advised, to use the GitHub API that produces a tar archive of the repo. I was stuck on that for weeks because the command I was running produced a corrupted file.
After raging over that problem, I realised that I'd gain some portability and time by cloning directly with a git command.
It worked well, so everything is fine!
But Il_totore opened an issue:
https://user-images.githubusercontent.com/61330081/96449715-8df21d80-1215-11eb-9a22-588c77ce9870.png
So I added Python support for build scripts (minimum version 3.5.x) and allowed path specification.
After that, on his advice, I made a kind of Python API for nicer build scripts, and it produced this:
```py
from wngbuild import * # Import everything from the wngbuild module

# Set up a build profile that compiles all files in src/ and places the binary in build/custom/prog.exe
build = BuildProfile(files="src/*.c", output="build/custom/prog.exe")
build.cc = r"C:\MinGW\bin\gcc.exe" # Set the compiler (optional, "gcc" by default)
build.flags = "-W -Wall -Werror -Wextra" # Set the flags the compile command will be run with (optional)

build.run()       # Run the compilation command
build.runOutput() # Run the binary produced by the compilation (raises an error if the compilation command fails)
```
https://github.com/Wmanage/wng
It is still a WIP and there are loads of features I could add to it, but I will be very happy to answer your questions or help you use it.
Thanks for reading and have a nice day.
submitted by Wafelack to rust [link] [comments]

PCSX2 official Arch Linux package not recommended

Arch Linux's community package for the emulator PCSX2, which is in their official multilib repositories, has picked up some questionable changes in the way the binary is compiled. I chased the maintainers up about defining OPENCL_API=ON, DISABLE_ADVANCE_SIMD=ON and EGL_API=OFF. After making some changes, they went ahead and built and distributed the 64-bit version of the emulator prematurely. On top of this, the package has been moved off the stable releases that it had always followed up until now.
Because of these changes, as well as future unwanted ones, for the foreseeable future we would like to NOT recommend using the pcsx2 package in the Arch Linux repositories. Instead, please use the pcsx2-git package on the AUR, which is maintained by weirdbeardgame kenshen (a contributor to the project) with help from myself and others. The AUR package is cared for much closer to the way the emulator developers would prefer. If you would like a package that distributes a precompiled binary, please voice your opinion - if there is enough interest, we might get one going. If the package maintainer for Arch Linux's repositories reads this, please consider looking at our PKGBUILD, following it much more closely in your version, and keeping your version at the stable 1.6 release.
Thank you
EDIT: Add explanation for the SIMD build flag
EDIT-2: I want to clarify that this is in the testing repository and they haven't pushed this to the main repositories yet
submitted by JibbityJobbity to linux_gaming [link] [comments]

Zabbix 5.2 is released! Some more details.

The new major release comes with an impressive list of new features, improvements and out of the box integrations:
Zabbix offers out of the box official integrations with:
Other major improvements:
Official packages are available for:
One-click deployment is available for the following cloud platforms:
and much more!
Read release notes for a complete list of improvements: https://www.zabbix.com/rn/rn5.2.0
In order to upgrade you just need to download and install the new binaries (server, proxy and Web UI). When you start Zabbix Server it will automatically upgrade your database. Zabbix agents are backward compatible, so there is no need to install new agents; you can do that anytime later if needed.
submitted by alexvl to zabbix [link] [comments]

[Update] CCSupport 1.3 - Module Providers

CCSupport 1.3 is out now on https://opa334.github.io (also submitted to BigBoss) and adds a new feature that module developers can utilize.
Previously CCSupport only loaded regular third-party modules. Every single CC module added would need its own bundle / binary. This made certain things impossible, such as giving the user an option to specify how many instances of a certain module they want (unless you planned on doing some crazy shenanigans like FlipConvert).
Well, long story short, this update addresses that limitation by adding an additional API that allows developers to create module providers. A module provider can provide an arbitrary amount of modules; here is a video of my example provider in action (note that this specific provider provides the same module multiple times, but that is not required at all - you could make a provider provide a 2x2 app launcher module, a network module and some random switch if you wanted).
For developers interested, module providers are documented here and a new theos template for providers has been released here.
Have fun and follow me on twitter!
submitted by opa334 to jailbreak [link] [comments]

gdbstub 0.4: An ergonomic, #![no_std] implementation of the GDB Remote Serial Protocol in Rust

crates.io | docs | repo
An ergonomic and easy-to-integrate implementation of the GDB Remote Serial Protocol in Rust, with full #![no_std] support. gdbstub makes extensive use of Rust's powerful type system + generics to enforce protocol invariants at compile time, minimizing the number of tricky protocol details end users have to worry about.
A lot has changed since my last post announcing gdbstub 0.2!
Version 0.4 includes a major API overhaul, tons of internal optimizations, and a slew of new GDB protocol features, making it the fastest, leanest, and most featureful release of gdbstub yet!
It's been absolutely incredible having so many people contribute to the library, and seeing gdbstub being used in all sorts of cool projects. Thank you for all the support!
By the way, if you're taking part in Hacktoberfest this year, there are plenty of ways to contribute to gdbstub. There's a whole laundry list of protocol extensions and new architectures to support, so check out the issue tracker and consider lending a hand!
Cheers!
submitted by daniel5151 to rust [link] [comments]

ESP8266 development on OpenBSD with platformio

Hi,
I got platformio running on OpenBSD-current (it should work with older releases too) and was able to compile a firmware for my ESP8266 NodeMCUv2. I haven't uploaded it to the board yet, since the board is still somewhere in the attic. I will test this soon and update this post. I guess it'll just work.

setup

You have to install the packages arduino-esp8266 and py3-pip:
# pkg_add arduino-esp8266 py3-pip 
And install platformio via pip:
# pip install platformio 

create project

The next steps were done as non-root user.
Now, create your project folder:
$ mkdir -p ~/code/myproject
$ cd ~/code/myproject
and initialize a platformio project:
$ pio init 
It should look something like this:
$ ls -la
total 64
drwxr-xr-x   6 lotherk  lotherk   512 Nov  1 09:02 .
drwxr-xr-x  28 lotherk  lotherk  1536 Nov  1 09:02 ..
-rw-r--r--   1 lotherk  lotherk     5 Nov  1 09:02 .gitignore
drwxr-xr-x   2 lotherk  lotherk   512 Nov  1 09:02 include
drwxr-xr-x   2 lotherk  lotherk   512 Nov  1 09:02 lib
-rw-r--r--   1 lotherk  lotherk   364 Nov  1 09:02 platformio.ini
drwxr-xr-x   2 lotherk  lotherk   512 Nov  1 09:02 src
drwxr-xr-x   2 lotherk  lotherk   512 Nov  1 09:02 test
Now start writing code in src/main.cpp:
#include <Arduino.h>
#include <ESP8266WiFi.h>

void setup() {
}

void loop() {
}
And edit platformio.ini:
[platformio]
default_envs = nodemcuv2

[env:nodemcuv2]
platform = espressif8266
framework = arduino
board = nodemcuv2
Please see the official documentation for which platform, framework or board you might need. Remember, this is all for ESP8266 chips.

first build

It's now time for the first build, which will very likely fail:
$ pio run 
This will give you:
Processing nodemcuv2 (platform: espressif8266; framework: arduino; board: nodemcuv2)
--------------------------------------------------------------------------------
Tool Manager: Installing toolchain-xtensa @ ~2.40802.191122
Error: Could not find the package with 'toolchain-xtensa @ ~2.40802.191122' requirements for your system 'openbsd_amd64'
Researching this error led me to https://github.com/trombik/platformio-freebsd-toolchain-xtensa/. What @trombik did was create a fake platformio package with symlinks to the right files on the system. In his case it was FreeBSD, but I tried it anyway. It mostly worked out of the box; I just had to symlink the xtensa-lx106-elf-* binaries from /usr/local/bin into the package. I created my own fake package for OpenBSD at https://github.com/lotherk/platformio-openbsd-toolchain-xtensa.
Clone the repository and place it at ~/.platformio/packages/toolchain-xtensa. It is important to name the folder toolchain-xtensa! Ensure that the xtensa toolchain is installed; it should come with the arduino-esp8266 package:
$ pkg_info | grep xtensa
xtensa-lx106-elf-binutils-2.32    binutils for xtensa-lx106-elf cross-development
xtensa-lx106-elf-gcc-5.2.0        gcc for xtensa-lx106-elf cross-development
xtensa-lx106-elf-newlib-2.1.0p0   newlib for xtensa-lx106-elf cross-development
Now change to the directory and run init.sh, which will create all the symlinks you need.
$ cd ~/.platformio/packages/toolchain-xtensa/
$ ./init.sh
Back to our project and re-run pio:
$ cd ~/code/myproject
$ pio run
This time it does a lot more, but now fails complaining it can't find tools-esptool:
Processing nodemcuv2 (platform: espressif8266; framework: arduino; board: nodemcuv2)
-----------------------------------------------------------------------------------
Tool Manager: Installing framework-arduinoespressif8266 @ ~3.20704.0
Tool Manager: Warning! More than one package has been found by framework-arduinoespressif8266 @ ~3.20704.0 requirements:
 - platformio/framework-arduinoespressif8266 @ 3.20704.0
 - jason2866/framework-arduinoespressif8266 @ 2.7.4.1
 - tasmota/framework-arduinoespressif8266 @ 2.7.4.3
Tool Manager: Please specify detailed REQUIREMENTS using package owner and version (showed above) to avoid name conflicts
Unpacking [####################################] 100%
Tool Manager: framework-arduinoespressif8266 @ 3.20704.0 has been installed!
Tool Manager: Installing tool-esptool @ <2
Tool Manager: Warning! More than one package has been found by tool-esptool @ <2 requirements:
 - platformio/tool-esptool @ 1.413.0
 - volcas/tool-esptool @ 1.413.1
Tool Manager: Please specify detailed REQUIREMENTS using package owner and version (showed above) to avoid name conflicts
Error: Could not find the package with 'tool-esptool @ <2' requirements for your system 'openbsd_amd64'
Fortunately this is as easy to fix as toolchain-xtensa. I've created a fake package for esptool as well. esptool must be installed, though - which it already should be because of the arduino-esp8266 package. Clone https://github.com/lotherk/platformio-openbsd-tool-esptool to ~/.platformio/packages/tool-esptool (again, the naming is important...) and run init.sh as you did with the toolchain-xtensa package.
Rerun pio and it should compile now:
$ cd ~/code/myproject $ pio run Processing nodemcuv2 (platform: espressif8266; framework: arduino; board: nodemcuv2) -------------------------------------------------------------------------------- Verbose mode can be enabled via `-v, --verbose` option CONFIGURATION: https://docs.platformio.org/page/boards/espressif8266/nodemcuv2.html PLATFORM: Espressif 8266 (2.6.2) > NodeMCU 1.0 (ESP-12E Module) HARDWARE: ESP8266 80MHz, 80KB RAM, 4MB Flash PACKAGES: - framework-arduinoespressif8266 3.20704.0 (2.7.4) - tool-esptool 0.1.0 - tool-esptoolpy 1.20800.0 (2.8.0) - toolchain-xtensa 2.40802.191122 (4.8.2) LDF: Library Dependency Finder -> http://bit.ly/configure-pio-ldf LDF Modes: Finder ~ chain, Compatibility ~ soft Found 29 compatible libraries Scanning dependencies... Dependency Graph |--  1.0 Building in release mode Compiling .pio/build/nodemcuv2/src/main.cpp.o Generating LD script .pio/build/nodemcuv2/ld/local.eagle.app.v6.common.ld Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/BearSSLHelpers.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/CertStoreBearSSL.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFi.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFiAP.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFiGeneric.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFiGratuitous.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFiMulti.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFiSTA-WPS.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFiSTA.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFiScan.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/WiFiClient.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/WiFiClientSecureAxTLS.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/WiFiClientSecureBearSSL.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/WiFiServer.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/WiFiServerSecureAxTLS.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/WiFiServerSecureBearSSL.cpp.o Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/WiFiUdp.cpp.o Archiving .pio/build/nodemcuv2/libFrameworkArduinoVariant.a Indexing .pio/build/nodemcuv2/libFrameworkArduinoVariant.a Compiling .pio/build/nodemcuv2/FrameworkArduino/Crypto.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/Esp-frag.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/Esp-version.cpp.o Archiving .pio/build/nodemcuv2/lib74a/libESP8266WiFi.a Indexing .pio/build/nodemcuv2/lib74a/libESP8266WiFi.a Compiling .pio/build/nodemcuv2/FrameworkArduino/Esp.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/FS.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/FSnoop.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/FunctionalInterrupt.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/HardwareSerial.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/IPAddress.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/MD5Builder.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/Print.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/Schedule.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/StackThunk.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/Stream.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/StreamString.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/Tone.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/TypeConversion.cpp.o Compiling 
.pio/build/nodemcuv2/FrameworkArduino/Updater.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/WMath.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/WString.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/abi.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/base64.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/cbuf.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/cont.S.o Compiling .pio/build/nodemcuv2/FrameworkArduino/cont_util.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_app_entry_noextra4k.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_eboot_command.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_features.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_flash_quirks.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_flash_utils.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_i2s.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_main.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_noniso.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_phy.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_postmortem.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_si2c.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_sigma_delta.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_spi_utils.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_timer.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_waveform.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_wiring.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_wiring_analog.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_wiring_digital.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_wiring_pulse.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_wiring_pwm.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_wiring_shift.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/crc32.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/debug.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/flash_hal.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/gdb_hooks.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/heap.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/libb64/cdecode.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/libb64/cencode.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/libc_replacements.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/sntp-lwip2.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/spiffs/spiffs_cache.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/spiffs/spiffs_check.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/spiffs/spiffs_gc.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/spiffs/spiffs_hydrogen.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/spiffs/spiffs_nucleus.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/spiffs_api.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/sqrt32.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/time.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/uart.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/umm_malloc/umm_info.c.o Compiling .pio/build/nodemcuv2/FrameworkArduino/umm_malloc/umm_integrity.c.o Compiling .pio/build/nodemcuv2/FrameworkArduino/umm_malloc/umm_local.c.o Compiling 
.pio/build/nodemcuv2/FrameworkArduino/umm_malloc/umm_malloc.cpp.o Compiling .pio/build/nodemcuv2/FrameworkArduino/umm_malloc/umm_poison.c.o Archiving .pio/build/nodemcuv2/libFrameworkArduino.a Indexing .pio/build/nodemcuv2/libFrameworkArduino.a Linking .pio/build/nodemcuv2/firmware.elf Retrieving maximum program size .pio/build/nodemcuv2/firmware.elf Checking size .pio/build/nodemcuv2/firmware.elf Building .pio/build/nodemcuv2/firmware.bin Advanced Memory Usage is available via "PlatformIO Home > Project Inspect" RAM: [=== ] 32.7% (used 26776 bytes from 81920 bytes) Flash: [== ] 24.6% (used 256780 bytes from 1044464 bytes) Creating BIN file ".pio/build/nodemcuv2/firmware.bin" using "/home/lotherk/.platformio/packages/framework-arduinoespressif8266/bootloaders/eboot/eboot.elf" and ========================= [SUCCESS] Took 70.35 seconds ========================= 
Et voila, you've compiled a firmware for your esp8266 chip on OpenBSD.
Uploading the firmware should only be a matter of configuring the right serial port in platformio.ini. As soon as I get mine from the attic, I will try it and update this post.

Edit: spelling
submitted by lotherk to openbsd [link] [comments]

The first official release of the ZOIA Librarian app is now available!

Version 1.0 is now out for Windows 10, Mac OS X, and Linux (Ubuntu)! It can be downloaded here https://github.com/meanmedianmoge/zoia_lib - see the "How to Install" section.
EDIT: Mac 1.0 release has been updated (see the link above to download the zip), and it should open successfully upon double-clicking the .app file! Apologies for any inconvenience.
If you have a GitHub account, feel free to create an issue regarding any performance issues you encounter. If you don't have a GitHub account, send feedback and bugs to me at [[email protected]](mailto:[email protected]).
Overview and tutorial video: https://www.youtube.com/watch?v=JLOUrWtG1Pk
User Manual: https://github.com/meanmedianmoge/zoia_lib/blob/master/documentation/User%20Manuals/ZOIA%20Librarian%20-%20User%20Manual%20-%20Version%201.0.pdf
Changelog is below. Special thanks to our beta testers, contributors, and supporters for the interest in this application!
Patch Notes Version 1.0 (September 25, 2020)
New Features
- Finalized ZOIA binary parsing implementation. Again, massive thanks to djigneo/apparent1 for the initial C# code. As of this release, all features of the patch are fully exposed and can be decoded into a JSON object for further use.
- Patch visualizer has been updated with more information to help you understand patches at a quick glance.
- Added the ability to search and sort for patches by author name. This applies to Local and Bank tabs only. PS tab author search and sort will not be supported at this time due to the API structure.
- Updated patch importing so that patches with near-identical names are merged upon import (instead of strictly identical names).
- Updated the behavior of the SD and Bank tables so that multiples can be selected and moved in different ways: hold Shift and click the start and end patches to move, and/or hold Ctrl/Cmd and click on each patch you'd like to move.
- Patches can now be moved into a bank in the following ways: dragging single or multiple selections (similar options as above) at once, and/or clicking the Add to Bank button for single selections at a time.
- Added a Clear Bank button to wipe the bank tables clean.
- Added a new Help toolbar which allows users to access documentation and useful ZOIA resources. These will display in the PS tab browser panel. You can also search for different commands/shortcuts.
- Added a Reset UI menu option in the event that users mangle the UI panels or tables.
- Updated the light theme colors to give it a more muted look.
- Alternating row colors is now a saved preference. It will save whatever is the current setting upon closing the application.
- Added a step-by-step guide for how to compile the application from source for developers, contributors or users who were unable to open the beta builds.
- Added our first Linux build! We aim to support the latest stable version of Ubuntu going forward. If you are a Linux user who prefers other distributions, please contact me.
Fixes
- Fixed an issue that occurred while importing a version history (Mac).
- Removed the threads used with menu action multi-import functions (Mac temporary fix).
- Fixed an issue where the dates of imported patches were back-dated to the history of the SD card.
- Fixed an issue with SD card imported files having mangled filenames (Windows). This also caused patches to not export properly.
- Fixed an issue where changing the font/font size didn't apply to themes or buttons.
Known Issues
- Certain patch binaries cannot be fully decoded due to being saved on deprecated ZOIA firmware.
- Saved UI preferences are not being applied correctly for the Local Storage tab - specifically the vertical splitter (Mac).
Future Plans
- Expansion view of routing for patch visualizer. Right now, the connections are displayed on a module-block level, but not from a general patch level. The expander would provide an in-depth visualization of audio and CV routing, likely to be displayed in a new tab.
- Extend the binary decoder methods into an API for other applications/programs to utilize.
- Simplify and automate code structure for releases (currently, a minimal-working version of the code needs to be created for the app-building process).
- Allow for custom themes/colors in the UI.
- Actually fix threading issues associated with menu action multi-imports.
As always, we welcome any feedback you may have. Thanks for being awesome :) - Mike M.
submitted by meanmedianmoge to ZOIA [link] [comments]

Best way to use Whatsapp?

I've just started university, and all of my housemates want to use Whatsapp for the group chat so there's a lot of pressure to install it. This is not really the hill I want to die on.
I've ordered a burner SIM, but don't have a secondary phone.
For sandboxing, I'm thinking of running it in an Android x86 VM. Android Studio's emulators could be an option, but Android Studio is proprietary. I thought Debian used to maintain a free build of Android Studio, but now they're just directing people to Google's website to download binaries from there. Is Android x86 a good idea then?
Does Whatsapp need Google play services these days, or have any other weird API requirements that will prevent it from running on regular Android?
submitted by 770814277adsf to privacy [link] [comments]

I created a mathematically optimal team generator!

Hi all,

I've been playing FPL for a few years now, and by no means am I an expert. However, I like math and particularly optimization problems. And a few days ago I thought to use my math knowledge for something useful.

My goal was to start from some metric that predicts the amount of points a player will score (either in the next gameweek, or over the whole season). From that metric, I wanted to generate the mathematically optimal team, aka choose the 15 players that will give me the most points, while staying within budget. I realized this is a constrained knapsack problem, which can be solved by dedicated solvers as long as the optimization problem is properly defined. Note that while I make a big assumption by choosing some metric from which I start, the solver actually finds the most optimal team, without any prior assumptions about best formation, budget spread, etc!

(Warning: from this point onward it gets kinda math-y, so turn back or skip ahead to the results if that's not your thing)

MATH

So first, the optimization variable needed to be defined. For this purpose I introduced a binary variable x which is basically a vector of all players in the game, where a value of 1 indicates that player is part of our dream team and a 0 means it's not.

Secondly, an objective function needs to be defined, which is what we want to maximize. In our case, this is the total expected points our dreamteam will score. I included double captain points and reduced points for bench players here. The objective function is linear, which is nice since it is convex (an important property which makes solving the problem much easier, and is even required for most solvers).

Lastly are the constraints. Obviously, there is the 100M budget constraint. Then we also want the required amount of goalkeepers, defenders, midfielders and forwards. Then we need to keep in mind the formation constraints, and lastly are the max 3 players per club constraints. Luckily, these are all linear (so convex) constraints.
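Written out, the optimization problem described above takes the following shape - a sketch in which N is the number of players in the game, p is the vector of predicted points, c the vector of player prices, and x the binary selection vector; the captain doubling, bench weighting and starting-XI formation constraints mentioned earlier add a few more binary variables but don't change the overall structure:

$$
\begin{aligned}
\max_{x \in \{0,1\}^{N}} \quad & p^{\top} x \\
\text{s.t.} \quad & c^{\top} x \le 100, \\
& \sum_{i \in \mathrm{GK}} x_i = 2, \quad \sum_{i \in \mathrm{DEF}} x_i = 5, \quad \sum_{i \in \mathrm{MID}} x_i = 5, \quad \sum_{i \in \mathrm{FWD}} x_i = 3, \\
& \sum_{i \in \text{club } k} x_i \le 3 \quad \text{for every club } k.
\end{aligned}
$$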

I solved this problem using CVX for MATLAB, particularly with the Gurobi solver since it allows mixed integer programs. It tries to find the optimal variable x* which maximizes the objective function while staying within the constraints. And amazingly, it actually comes up with solutions!

RESULTS
So like I said before, I need to start from some metric that indicates how many points a player will score (if you have any recommendations, let me know!). For a lack of better options, I chose two different metrics:

  1. The total points scored by the player last year
  2. The expected points scored by the player in the next gameweek (ep_next in the FPL API, for fellow nerds)

Obviously, both metrics are not perfect. The first one doesn't take into account transfers, promoted teams, injuries, fixtures, position changes etc. However, it should work decent for making a set-and-forget team with proven PL players.

The second metric seems to have a problem with overrating bench players of top PL teams such as Ozil, Minamino, etc. I'm not really sure why, but it's a metric taken directly from FPL with undisclosed underlying math so it's not my problem. Also, keep in mind that since the first gameweek does not feature City/Utd/Burnley/Villa players, this metric predicts them to score 0 points so they won't feature in the optimal team.

Team 1: Last year's dreamteam
Bench:

Team 2: Next week's dreamteam
Bench:

Both teams cost exactly 100M.

At first glance, there are some obvious flaws with both teams, but most of them are because the metric used as input is flawed, as I explained before. Lundstram is obviously a much worse choice this year due to various reasons, and Team 2 has some top 6 players which are very much not nailed.

However, what I think is interesting is that both teams have only 2 starting midfielders, despite the trend of people stacking premium midfielders. On the other hand, premium defenders seem to be very good value, and the importance of TAA and Robertson is underlined. Similarly, near-premium forwards in the 7.5-10 price range seem to be a good choice.

CONCLUSION
I'm quite content with my optimal team generator. Using it, I don't need to use vague value metrics such as VAPM. The input can be any metric which relates simply to how many points a player will score. Choices about relative value of e.g. defenders against midfielders, formation, budget spread etc. are all taken out of my hands with this team generator. The team that is generated is only as good as the metric used as input. But given a certain input metric, you can be sure that the generated team is optimal.

I would gladly share my MATLAB code if there is any interest. Also, I'm open to suggestions on how to extend it. EDIT: Here it is.
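For anyone who wants to play with the idea but doesn't have MATLAB/CVX/Gurobi, the same squad-selection knapsack can be sketched from Python with an open-source MILP solver. This is not the code used in the post - just a minimal illustration using the PuLP library, with a placeholder player list and the raw points metric as the objective (captain doubling and bench weighting are left out for brevity):

```py
import pulp

# Placeholder player pool: (name, position, club, price in millions, predicted points).
# In practice this list would hold every player in the game, pulled from the FPL API.
players = [
    ("GK_A",  "GK",  "CLUB1", 5.0, 130),
    ("GK_B",  "GK",  "CLUB2", 4.5, 100),
    ("DEF_A", "DEF", "CLUB1", 7.5, 180),
    # ... one entry per player
]

prob = pulp.LpProblem("fpl_squad", pulp.LpMaximize)
pick = [pulp.LpVariable(f"pick_{i}", cat="Binary") for i in range(len(players))]

# Objective: total predicted points of the selected squad.
prob += pulp.lpSum(pick[i] * p[4] for i, p in enumerate(players))

# Budget: total squad cost must stay within 100.0 million.
prob += pulp.lpSum(pick[i] * p[3] for i, p in enumerate(players)) <= 100.0

# Squad composition: 2 GK, 5 DEF, 5 MID, 3 FWD.
for pos, count in [("GK", 2), ("DEF", 5), ("MID", 5), ("FWD", 3)]:
    prob += pulp.lpSum(pick[i] for i, p in enumerate(players) if p[1] == pos) == count

# At most 3 players from any single club.
for club in {p[2] for p in players}:
    prob += pulp.lpSum(pick[i] for i, p in enumerate(players) if p[2] == club) <= 3

prob.solve()
print([players[i][0] for i in range(len(players)) if pick[i].value() == 1])
```

PuLP ships with the CBC solver, so prob.solve() works out of the box once the player list is filled in; whatever points metric you plug into the last tuple element plays the same role as the metrics discussed above.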


(Tiny disclaimer: Remember when I said: "without any prior assumptions"? That is a lie. There is one tiny assumption I made, which is how often bench players are subbed on. I guesstimated this to happen approximately 10% of the time.)
submitted by nectri42 to FantasyPL [link] [comments]

Gridcoin 5.0.0.0-Mandatory "Fern" Release

https://github.com/gridcoin-community/Gridcoin-Research/releases/tag/5.0.0.0
Finally! After over ten months of development and testing, "Fern" has arrived! This is a whopper. 240 pull requests merged. Essentially a complete rewrite that was started with the scraper (the "neural net" rewrite) in "Denise" has now been completed. Practically the ENTIRE Gridcoin-specific codebase resting on top of the vanilla Bitcoin/Peercoin/Blackcoin PoS code has been rewritten. This removes the team requirement at last (see below), although there are many other important improvements besides that.
Fern was a monumental undertaking. We had to encode all of the old rules active for the v10 block protocol in new code and ensure that the new code was 100% compatible. This had to be done in such a way as to clear out all of the old spaghetti and ring-fence it with tightly controlled class implementations. We then wrote an entirely new, simplified ruleset for research rewards and reengineered contracts (which includes beacon management, polls, and voting) using properly classed code. The fundamentals of Gridcoin with this release are now on a very sound and maintainable footing, and the developers believe the codebase as updated here will serve as the fundamental basis for Gridcoin's future roadmap.
We have been testing this for MONTHS on testnet in various stages. The v10 (legacy) compatibility code has been running on testnet continuously as it was developed to ensure compatibility with existing nodes. During the last few months, we have done two private testnet forks and then the full public testnet testing for v11 code (the new protocol which is what Fern implements). The developers have also been running non-staking "sentinel" nodes on mainnet with this code to verify that the consensus rules are problem-free for the legacy compatibility code on the broader mainnet. We believe this amount of testing is going to result in a smooth rollout.
Given the amount of changes in Fern, I am presenting TWO changelogs below. One is high level, which summarizes the most significant changes in the protocol. The second changelog is the detailed one in the usual format, and gives you an inkling of the size of this release.

Highlights

Protocol

Note that the protocol changes will not become active until we cross the hard-fork transition height to v11, which has been set at 2053000. Given current average block spacing, this should happen around October 4, about one month from now.
Note that to get all of the beacons in the network on the new protocol, we are requiring ALL beacons to be validated. A two week (14 day) grace period is provided by the code, starting at the time of the transition height, for people currently holding a beacon to validate the beacon and prevent it from expiring. That means that EVERY CRUNCHER must advertise and validate their beacon AFTER the v11 transition (around Oct 4th) and BEFORE October 18th (or more precisely, 14 days from the actual date of the v11 transition). If you do not advertise and validate your beacon by this time, your beacon will expire and you will stop earning research rewards until you advertise and validate a new beacon. This process has been made much easier by a brand new beacon "wizard" that helps manage beacon advertisements and renewals. Once a beacon has been validated and is a v11 protocol beacon, the normal 180 day expiration rules apply. Note, however, that the 180 day expiration on research rewards has been removed with the Fern update. This means that while your beacon might expire after 180 days, your earned research rewards will be retained and can be claimed by advertising a beacon with the same CPID and going through the validation process again. In other words, you do not lose any earned research rewards if you do not stake a block within 180 days and keep your beacon up-to-date.
The transition height is also when the team requirement will be relaxed for the network.

GUI

Besides the beacon wizard, there are a number of improvements to the GUI, including new UI transaction types (and icons) for staking the superblock, sidestake sends, beacon advertisement, voting, poll creation, and transactions with a message. The main screen has been revamped with a better summary section, and better status icons. Several changes under the hood have improved GUI performance. And finally, the diagnostics have been revamped.

Blockchain

The wallet sync speed has been DRASTICALLY improved. A decent machine with a good network connection should be able to sync the entire mainnet blockchain in less than 4 hours. A fast machine with a really fast network connection and a good SSD can do it in about 2.5 hours. One of our goals was to reduce or eliminate the reliance on snapshots for mainnet, and I think we have accomplished that goal with the new sync speed. We have also streamlined the in-memory structures for the blockchain which shaves some memory use.
There are so many goodies here it is hard to summarize them all.
I would like to thank all of the contributors to this release, but especially thank @cyrossignol, whose incredible contributions formed the backbone of this release. I would also like to pay special thanks to @barton2526, @caraka, and @Quezacoatl1, who tirelessly helped during the testing and polishing phase on testnet with testing and repeated builds for all architectures.
The developers are proud to present this release to the community and we believe this represents the starting point for a true renaissance for Gridcoin!

Summary Changelog

Accrual

Changed

Most significantly, nodes calculate research rewards directly from the magnitudes in EACH superblock between stakes instead of using a two- or three- point average based on a CPID's current magnitude and the magnitude for the CPID when it last staked. For those long-timers in the community, this has been referred to as "Superblock Windows," and was first done in proof-of-concept form by @denravonska.
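To make the difference concrete, here is a toy comparison of the two accrual approaches in Python - the numbers and the payout constant are placeholders purely for illustration, not Gridcoin's actual magnitude unit or superblock spacing:

```py
# Each entry is (days covered by the superblock, CPID magnitude in that superblock)
# for the interval between two stakes.
superblocks = [(1.0, 120.0), (1.0, 80.0), (1.0, 95.0), (1.0, 40.0)]

PAYOUT_PER_MAG_DAY = 0.25  # placeholder constant, not the real magnitude unit

# Old approach: average the magnitudes at the endpoints of the staking interval
# and multiply by the elapsed time, ignoring what happened in between.
elapsed_days = sum(days for days, _ in superblocks)
endpoint_avg = (superblocks[0][1] + superblocks[-1][1]) / 2
old_reward = endpoint_avg * elapsed_days * PAYOUT_PER_MAG_DAY

# New "superblock windows" approach: accrue superblock by superblock, so dips
# and spikes in magnitude between stakes are actually accounted for.
new_reward = sum(days * mag for days, mag in superblocks) * PAYOUT_PER_MAG_DAY

print(f"endpoint-average accrual: {old_reward:.2f}")
print(f"per-superblock accrual:   {new_reward:.2f}")
```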

Removed

Beacons

Added

Changed

Removed

Unaltered

As a reminder:

Superblocks

Added

Changed

Removed

Voting

Added

Changed

Removed

Detailed Changelog

[5.0.0.0] 2020-09-03, mandatory, "Fern"

Added

Changed

Removed

Fixed

submitted by jamescowens to gridcoin [link] [comments]

Red Hat OpenShift Container Platform Instruction Manual for Windows Powershell

Introduction to the manual
This manual is made to guide you step by step through setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing and why you will be doing it, all in one convenient manual made for Windows users. If you want to try it on Linux or macOS, we have also added the commands necessary to get the CodeReady Containers running on those operating systems. Be warned, however, that there are some system requirements necessary to run the CodeReady Containers we will be using. These requirements are specified in the chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform who has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual on Linux or macOS, we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
What is the OpenShift Container platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container Platform makes use of CodeReady Containers, which are pre-configured containers that can be used for development and testing purposes. There are also CodeReady Workspaces, which are used to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because CodeReady Containers and CodeReady Workspaces help programmers and developers build their applications faster, and it also allows them to test their applications in the same environment. One of the advantages provided by OpenShift is efficient container orchestration, which allows for faster container provisioning, deployment and management by streamlining and automating these processes.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual, some knowledge is mandatory: because most of the commands are entered in the command line interface, it is necessary to know how it works and how to browse through files and folders. If you don't have this basic knowledge, or have trouble with the basic PowerShell command line commands, a cheat sheet might offer some help. We recommend the following cheat sheet for Windows:
https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf
Another option is to read through the operating system's documentation or introduction guides, though the documentation can be overwhelming due to the sheer number of commands.
Microsoft: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
MacOS
https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge there are also some things that can be helpful to know just to make the use of OpenShift a bit simpler. This consists of some general knowledge on PaaS like Dockers and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

System requirements

Minimum System requirements

The Red Hat OpenShift CodeReady Containers require the following minimum hardware:
Hardware requirements
Code Ready Containers requires the following system resources:
● 4 virtual CPUs
● 9 GB of free random-access memory
● 35 GB of storage space
● A physical CPU with Hyper-V (Intel) or SVM mode (AMD); this has to be enabled in the BIOS
Software requirements
The Red Hat OpenShift CodeReady Containers have the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.

Required additional software packages for Linux

The CodeReady Containers on Linux require the libvirt and NetworkManager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
Linux Distribution — Installation command
Fedora: sudo dnf install NetworkManager
Red Hat Enterprise Linux/CentOS: su -c 'yum install NetworkManager'
Debian/Ubuntu: sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager

Installation

Getting started with the installation

To install CodeReady Containers a few steps must be undertaken. Because an OpenShift account is necessary to use the application, this will be the first step. An account can be made on “https://www.openshift.com/”, where you need to press login and after that select the option “Create one now”.
After making an account the next step is to download the latest release of CodeReady Containers and the pull secret from “https://cloud.redhat.com/openshift/install/crc/installer-provisioned”. Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pull secret should be saved because it is needed later.
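As an illustration, extracting the downloaded archive and adding that folder to your PATH in PowerShell could look like the sketch below (the archive and folder names are examples and depend on the release you downloaded):
PS C:\Users\[username]> Expand-Archive -Path .\crc-windows-amd64.zip -DestinationPath C:\Users\[username]\crc
PS C:\Users\[username]> $Env:PATH = "C:\Users\[username]\crc;$Env:PATH"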
The command line interface has to be opened before we can continue with the installation. For Windows we will use PowerShell. All the commands we use during the installation procedure of this guide are going to be entered in this command line interface unless stated otherwise. To be able to run the commands, use the command line interface to go to the location in your $PATH where you extracted the CodeReady Containers archive.
If you have installed an outdated version and you wish to update, then you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the container, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete 
When you have done the previous steps, please confirm that the correct and up-to-date crc binary is in use by checking it with the $crc version command; this should show you the version that is currently installed.
C:\Users\[username]\$PATH>crc version 
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup 

Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run the CodeReady Containers and create the ~/.crc directory if it did not previously exist. During this process you have to supply your pull secret; once this process is completed you have to reboot your system. When the system has restarted you can start the new CodeReady Containers virtual machine with the $crc start command, which starts the CodeReady virtual machine and the OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes you need to delete the virtual machine with the $crc delete command and create a new virtual machine and start that one with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers. So, to prevent data loss we recommend you save the data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup 
Before starting the machine, keep in mind that it is not possible to make changes to the virtual machine afterwards. For this tutorial, however, it is not necessary to change the configuration; if you don’t want to make any changes please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start 
Note: it is possible that you will get a nameserver error later on; if this is the case, please start it with crc start -n 1.1.1.1

Configuration

It is not necessary to change the default configuration to continue with this tutorial; this chapter is here for those who wish to do so and know what they are doing. However, for macOS and Linux it is necessary to change the DNS settings.

Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers use the command crc config. This command allows you to configure the crc binary and the CodeReady virtual machine. The command requires a subcommand before it is able to configure anything; the available subcommands for this binary and virtual machine are:
get, this command shows the value of a configurable property
set/unset, these commands set or unset the value of a configurable property
view, this command shows the current configuration in read-only mode
These subcommands operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks that are done by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this, you can set the value of a property that starts with skip-check or warn-check to true, so the check is skipped or turned into a warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get <property>
C:\Users\[username]\$PATH>crc config set <property> <value>
C:\Users\[username]\$PATH>crc config unset <property>
C:\Users\[username]\$PATH>crc config view
C:\Users\[username]\$PATH>crc config --help
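For example, to skip a particular startup check (the property name below is a placeholder; run crc config --help to see the real names for your version):
C:\Users\[username]\$PATH>crc config set skip-check-<check-name> true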

Configuring the Virtual Machine

You can use the cpus and memory properties to configure the default number of vCPUs and the amount of memory available to the virtual machine.
To increase the number of vCPUs available to the virtual machine, use $crc config set cpus <number>. Keep in mind that the default number of vCPUs is 4 and the number you wish to assign must be equal to or greater than the default value.
To increase the memory available to the virtual machine, use $crc config set memory <size-in-MiB>. Keep in mind that the default amount of memory is 9216 Mebibytes (MiB) and the amount you wish to assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set cpus <number> 
C:\Users\[username]\$PATH>crc config set memory <size-in-MiB> 
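For example (a hedged illustration; any values you pick must be at least the defaults mentioned above), to give the virtual machine six vCPUs and 12 GiB of memory:
C:\Users\[username]\$PATH>crc config set cpus 6 
C:\Users\[username]\$PATH>crc config set memory 12288 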

Configuring the DNS

Windows / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers, these are:
crc.testing, this is the domain for the core OpenShift services.
apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing crc setup. This command automatically adjusts the DNS configuration on the system. When executing crc start, additional checks are run to verify the configuration.
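If you want to see what crc setup changed, one option (a hedged sketch; the exact entries may differ between releases) is to look for the testing domains in the Windows hosts file from PowerShell:
PS C:\Users\[username]> Get-Content C:\Windows\System32\drivers\etc\hosts | Select-String "crc.testing"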

macOS DNS setup

macOS expects the following DNS configuration for the CodeReady Containers:
● CodeReady Containers creates a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires the following CodeReady Containers entry to function properly: api.crc.testing, an entry in /etc/hosts pointing at the VM IP address.

Linux DNS setup

CodeReady Containers expects a slightly different DNS configuration on Linux. Here, CodeReady Containers expects NetworkManager to manage networking. On Linux, NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf.
To set it up properly, the dnsmasq instance has to forward requests for the crc.testing and apps-crc.testing domains to 192.168.130.11. In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this will look like the following:
● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11
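A minimal sketch of how this file could be created from a terminal, assuming NetworkManager is already configured to use dnsmasq (the reload command may differ per distribution):
$ sudo tee /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf <<'EOF'
server=/crc.testing/192.168.130.11
server=/apps-crc.testing/192.168.130.11
EOF
$ sudo systemctl reload NetworkManager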

Accessing the Openshift Cluster

Accessing the Openshift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine, make sure that the virtual machine is running before continuing with this chapter. The OpenShift cluster can be accessed through the OpenShift web console or the client binary (oc).
First you need to execute the $crc console command; this command will open your web browser and direct a tab to the web console. After that, select the htpasswd_provider option in the OpenShift web console and log in as the developer user with the credentials provided in the output of the crc start command.
It is also possible to view the passwords for the kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster with both users, note that the kubeadmin user should only be used for administrative tasks such as user management, and the developer user for creating projects or OpenShift applications and deploying those applications.
C:\Users\[username]\$PATH>crc console C:\Users\[username]\$PATH>crc console --credentials 

Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env 
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH"
# Run this command to configure your shell:
# & crc oc-env | Invoke-Expression
This means we have to execute the command that the output gives us, in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression 
Note: this has to be executed every time you start a new shell; a solution is to move the oc binary to the same path as the crc binary.
To test if this step went correctly, execute the following command; if it returns without errors, oc is set up properly.
C:\Users\[username]\$PATH>.\oc 
Step 3
Now you need to log in as a developer user; this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that $crc start will provide you with the password that is needed to log in as the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443 
Step 4
The oc binary can now be used to interact with your OpenShift cluster. If you, for instance, want to verify whether the OpenShift cluster Operators are available, you can execute the command
$oc get co 
Keep in mind that by default CodeReady Containers disables the machine-config and monitoring Operators.
C:\Users\[username]\$PATH>oc get co 

Demonstration

Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform.
We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application.
As the next step we will show the user how to make changes in the network route. We also show how monitoring can be used within the platform, however within the current version of CodeReady Containers this has been disabled.
Lastly, we will show the user how to use user management within the platform.

Creating a project

To be able to create a project within the console you have to login on the cluster. If you have not yet done this, this can be done by running the command crc console in the command line and logging in with the login data from before.
When you are logged in as admin, switch to Developer. If you're already logged in as a developer, you don't have to switch. Switching between these can be done with the dropdown menu at the top left.
Now that you are properly logged in press the dropdown menu shown in the image below, from there click on create a project.
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the window shown in the image below will pop up. Here you can give your project a name and description. We chose to name it CodeReady with the display name CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210

Importing image

The Containers in OpenShift Container Platform are based on OCI or Docker formatted images. An image is a binary that contains everything needed to run a container as well as the metadata of the requirements needed for the container.
Within the OpenShift Container Platform it’s possible to obtain images in a number of ways. There is an integrated Docker registry that offers the possibility to download new images “on the fly”. In addition, OpenShift Container Platform can use third party registries such as:
- https://hub.docker.com/
- https://catalog.redhat.com/software/containers/search
Within this manual we are going to import an image from the Red Hat container catalog. In this example we’ll be using MediaWiki.
Search for the application in https://catalog.redhat.com/software/containers/search

https://preview.redd.it/c4mrbs0fitv51.png?width=672&format=png&auto=webp&s=f708f0542b53a9abf779be2d91d89cf09e9d2895
Navigate to “Get this image”
Follow the steps to “create a registry service account”, after that you can copy the YAML.
https://preview.redd.it/b4rrklqfitv51.png?width=1323&format=png&auto=webp&s=7a2eb14a3a1ba273b166e03e1410f06fd9ee1968
After the YAML has been copied we will go to the topology view and click on the YAML button
https://preview.redd.it/k3qzu8dgitv51.png?width=869&format=png&auto=webp&s=b1fefec67703d0a905b00765f0047fe7c6c0735b
Then we have to paste in the YAML, put in the name, namespace and your pull secret name (which you created through your registry account) and click on create.
https://preview.redd.it/iz48kltgitv51.png?width=781&format=png&auto=webp&s=4effc12e07bd294f64a326928804d9a931e4d2bd
Run the import command within PowerShell:
$oc import-image openshift4/mediawiki --from=registry.redhat.io/openshift4/mediawiki --confirm 
imagestream.image.openshift.io/mediawiki imported 
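To check that the import succeeded, you can list the image streams in your project with a standard oc command (replace <project-name> with the project you created earlier):
$oc get imagestream -n <project-name> 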

Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

Creating the application

To create an application from the previously imported image, go back to the console and the Topology view. From there, select Container Image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the option image you'll want to select the “image stream tag from internal registry” option. Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creation process you should see the following, which means that the application is running successfully.
https://preview.redd.it/ovv9l85jitv51.png?width=901&format=png&auto=webp&s=f78f350207add0b8a979b6da931ff29ffa30128c

Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling: vertical scaling and horizontal scaling. Vertical scaling means adding more CPU and storage to a single instance and is no longer supported by OpenShift. Horizontal scaling means increasing the number of machines running the application.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view as seen in the previous step. By pressing the up or down arrow, pods of the same application can be added or removed. This is a form of horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application, the more you scale it up, the more resources it will take up.
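The same scaling can also be done from the command line instead of the web console; a hedged example, assuming the deployment created for the application is called mediawiki:
$oc scale deployment/mediawiki --replicas=3 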

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94

Network

Since the OpenShift Container Platform is built on Kubernetes, it might be interesting to know some theory about its networking. Kubernetes ensures that the Pods within OpenShift can communicate with each other via the network and assigns them their own IP addresses. This makes all containers within a Pod behave as if they were on the same host. Giving each pod its own IP address means pods can be treated like physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services such as front-end and back-end services, the OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The Route is not the only thing that can be changed and or configured. Two other options that might be interesting but will not be demonstrated in this manual are:
- Ingress controller: within OpenShift it is possible to set your own certificate. A user must have a certificate/key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies: by default all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1
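For completeness, a route can also be created from the command line instead of the web console; a sketch, assuming the service created for the application is called mediawiki:
$oc expose service/mediawiki --name=mediawiki-route 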
Storage
OpenShift makes use of persistent storage; this type of storage uses persistent volume claims (PVCs). PVCs allow the developer to request persistent volumes without needing any knowledge about the underlying infrastructure.
Within this storage there are a few configuration options, such as the access mode and the requested size; a minimal claim could look like the sketch shown below.
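A minimal sketch of such a claim (the name and size below are illustrative, not taken from this manual):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mediawiki-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi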
It is, however, important to know how to manually reclaim persistent volumes: if you delete a PV, the associated data will not automatically be deleted with it, and the storage therefore cannot be reassigned to another PV yet.
To manually reclaim the PV, you need to follow the following steps:
Step 1: Delete the PV, this can be done by executing the following command
$oc delete pv <pv_name> 
Step 2: Now you need to clean up the data on the associated storage asset
Step 3: Now you can delete the associated storage asset, or if you wish to reuse the same storage asset, you can create a new PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift, to do this you would need to follow the following steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv 
This will give you a list of all the PVs in your cluster and will display the following attributes: Name, Capacity, Access modes, Reclaim policy, Status, Claim, Storage class, Reason and Age.
Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy:
$oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' 
In this example the reclaim policy will be changed to Retain.
$oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}' 
In this example the reclaim policy will be changed to Recycle.
$oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' 
In this example the reclaim policy will be changed to Delete.

Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv 

Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that is created by your containers, applications, and pods. To do so, click on the menu option in the top left corner, check that you are logged in as Developer, and click on “Monitoring”. Normally this function is not activated within CodeReady Containers, because it uses a lot of resources (RAM and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2

User management

According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. This can be a developer who develops applications or an administrator who manages the cluster. Users can be assigned to groups, which set the permissions applied to all the group’s members. For example, you can give API access to a group, which gives all members of the group API access.
There are multiple ways to create a user depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform. This default denies access for all the usernames and passwords.
First, we’re going to create a new user. The way this is done depends on the configured identity provider and on the mapping method used as part of the identity provider configuration.
For more information on what mapping methods are and how they function, see:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps are as follows.
$oc create user <username> 
Next up, we’ll create an OpenShift Container Platform identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity <identity_provider>:<identity_provider_user_name> 
The <identity_provider> is the name of the identity provider in the master configuration. For example, the following command creates an identity with identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s 
Create a user/identity mapping for the created user and identity:
$oc create useridentitymapping <identity_provider>:<identity_provider_user_name> <username> 
For example, the following command maps the identity to the user:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki 
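To verify that the user, the identity and the mapping were created, you can list them with standard oc commands (the output depends on your cluster):
$oc get users 
$oc get identity 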
Now we’re going to assign a role to this new user; this can be done by executing the following command:
$oc create clusterrolebinding <rolebinding_name> --clusterrole=<clusterrole_name> --user=<username> 
There is a --clusterrole option that can be used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all files and is able to manage the access level of other users.
Below is an example of the admin clusterrole command:
$oc create clusterrolebinding registry-controller \ --clusterrole=cluster-admin --user=admin 

What did you achieve?

If you followed all the steps within this manual you now should have a functioning Mediawiki Application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you’ll be able to set up your own Container Platform environment and host applications of your choosing.

Troubleshooting

Nameserver
There is the possibility that your CodeReady Containers virtual machine can't connect to the internet due to a nameserver error. When this is encountered, a working fix for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1 
Hyper-V admin
Should you run into a problem with Hyper-V, it might be because your user is not an administrator and is therefore not a member of the Hyper-V Administrators group.
  1. Click Start > Control Panel > Administration Tools > Computer Management. The Computer Management window opens.
  2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
  4. Click Add. The Select Users or Groups window opens.
  5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
  6. Click Apply, and then click OK.

Terms and definitions

These terms and definitions will be expanded upon; below you can see an example of how this is going to look, together with a few terms that require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. OpenShift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Container is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

Sources

  1. https://www.ibm.com/support/knowledgecenter/en/SSMKFH/com.ibm.apmaas.doc/install/hyperv_config_add_nonadmin_user_hyperv_usergroup.html
  2. https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/
  3. https://docs.openshift.com/container-platform/3.11/admin_guide/manage_users.html
submitted by Groep6HHS to openshift [link] [comments]

How to name sync & async items?

How should I organize parallel sets of synchronous and asynchronous modules, structs, and functions?
  1. This doesn't compile:
pub mod async; // keyword, no good
pub mod sync;
    I considered async_ and r#async but don't want to get punched.
  2. sync in std::sync means "synchronization" not "synchronous" so maybe that's not the best?
  3. Should I make default methods synchronous and add a suffix for async ones: open() and open_async()? (Async is the cool stuff, I don't like giving it the crappier name...)
  4. I've been suggested to make the async code the default and hide the sync stuff in a module.
async fn open() -> io::Result;
mod blocking {
    fn open() -> io::Result;
}
Other ideas? Are there any popular libraries that do both sync and async?
submitted by jkugelman to rust [link] [comments]

Node.js Application Monitoring with Prometheus and Grafana

Hi guys, we published this article on our blog (here) some time ago and I thought it could be interesting for node to read as well, since we got some good feedback on it!

What is application monitoring and why is it necessary?

Application monitoring is a method that uses software tools to gain insights into your software deployments. This can range from simple health checks to see if the server is available, to more advanced setups where a monitoring library is integrated into your server that sends data to a dedicated monitoring service. It can even involve the client side of your application, offering more detailed insights into the user experience.
For every developer, monitoring should be a crucial part of the daily work, because you need to know how the software behaves in production. You can let your testers work with your system and try to mock interactions or high loads, but these techniques will never be the same as the real production workload.

What is Prometheus and how does it work?

Prometheus is an open-source monitoring system that was created in 2012 by Soundcloud. In 2016, Prometheus became the second project (following Kubernetes) to be hosted by the Cloud Native Computing Foundation.
https://preview.redd.it/8kshgh0qpor51.png?width=1460&format=png&auto=webp&s=455c37b1b1b168d732e391a882598e165c42501a
The Prometheus server collects metrics from your servers and other monitoring targets by pulling their metric endpoints over HTTP at a predefined time interval. For ephemeral and batch jobs, whose metrics can't be scraped periodically due to their short-lived nature, Prometheus offers a Pushgateway. This is an intermediate server that monitoring targets can push their metrics to before exiting. The data is retained there until the Prometheus server pulls it later.
The core data structure of Prometheus is the time series, which is essentially a list of timestamped values that are grouped by metric.
With PromQL (Prometheus Query Language), Prometheus provides a functional query language allowing for selection and aggregation of time series data in real-time. The result of a query can be viewed directly in the Prometheus web UI, or consumed by external systems such as Grafana via the HTTP API.
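As a quick illustration (a hedged example, assuming a histogram metric such as the http_request_duration_seconds metric defined later in this article), the average request duration over the last five minutes could be queried like this:
rate(http_request_duration_seconds_sum[5m]) / rate(http_request_duration_seconds_count[5m])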

How to use prom-client to export metrics in Node.js for Prometheus?

prom-client is the most popular Prometheus client library for Node.js. It provides the building blocks to export metrics to Prometheus via the pull and push methods and supports all Prometheus metric types such as histogram, summaries, gauges and counters.

Setup sample Node.js project

Create a new directory and set up the Node.js project:
$ mkdir example-nodejs-app
$ cd example-nodejs-app
$ npm init -y

Install prom-client

The prom-client npm module can be installed via:
$ npm install prom-client 

Exposing default metrics

Every Prometheus client library comes with predefined default metrics that are assumed to be good for all applications on the specific runtime. The prom-client library also follows this convention. The default metrics are useful for monitoring the usage of resources such as memory and CPU.
You can capture and expose the default metrics with the following code snippet:
const http = require('http')
const url = require('url')
const client = require('prom-client')

// Create a Registry which registers the metrics
const register = new client.Registry()

// Add a default label which is added to all metrics
register.setDefaultLabels({
  app: 'example-nodejs-app'
})

// Enable the collection of default metrics
client.collectDefaultMetrics({ register })

// Define the HTTP server
const server = http.createServer(async (req, res) => {
  // Retrieve route from request object
  const route = url.parse(req.url).pathname

  if (route === '/metrics') {
    // Return all metrics in the Prometheus exposition format
    res.setHeader('Content-Type', register.contentType)
    res.end(register.metrics())
  }
})

// Start the HTTP server which exposes the metrics on http://localhost:8080/metrics
server.listen(8080)

Exposing custom metrics

While default metrics are a good starting point, at some point, you’ll need to define custom metrics in order to stay on top of things.
Capturing and exposing a custom metric for HTTP request durations might look like this:
const http = require('http')
const url = require('url')
const client = require('prom-client')

// Create a Registry which registers the metrics
const register = new client.Registry()

// Add a default label which is added to all metrics
register.setDefaultLabels({
  app: 'example-nodejs-app'
})

// Enable the collection of default metrics
client.collectDefaultMetrics({ register })

// Create a histogram metric
const httpRequestDurationMicroseconds = new client.Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in microseconds',
  labelNames: ['method', 'route', 'code'],
  buckets: [0.1, 0.3, 0.5, 0.7, 1, 3, 5, 7, 10]
})

// Register the histogram
register.registerMetric(httpRequestDurationMicroseconds)

// Define the HTTP server
const server = http.createServer(async (req, res) => {
  // Start the timer
  const end = httpRequestDurationMicroseconds.startTimer()

  // Retrieve route from request object
  const route = url.parse(req.url).pathname

  if (route === '/metrics') {
    // Return all metrics in the Prometheus exposition format
    res.setHeader('Content-Type', register.contentType)
    res.end(register.metrics())
  }

  // End timer and add labels
  end({ route, code: res.statusCode, method: req.method })
})

// Start the HTTP server which exposes the metrics on http://localhost:8080/metrics
server.listen(8080)
Copy the above code into a file called server.js and start the Node.js HTTP server with the following command:
$ node server.js 
You should now be able to access the metrics via http://localhost:8080/metrics.
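You can also check the endpoint from a terminal; a quick sketch using curl:
$ curl http://localhost:8080/metrics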

How to scrape metrics from Prometheus

Prometheus is available as Docker image and can be configured via a YAML file.
Create a configuration file called prometheus.yml with the following content:
global:
  scrape_interval: 5s

scrape_configs:
  - job_name: "example-nodejs-app"
    static_configs:
      - targets: ["docker.for.mac.host.internal:8080"]
The config file tells Prometheus to scrape all targets every 5 seconds. The targets are defined under scrape_configs. On Mac, you need to use docker.for.mac.host.internal as host, so that the Prometheus Docker container can scrape the metrics of the local Node.js HTTP server. On Windows, use docker.for.win.localhost and for Linux use localhost.
Use the docker run command to start the Prometheus Docker container and mount the configuration file (prometheus.yml):
$ docker run --rm -p 9090:9090 \
    -v `pwd`/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus:v2.20.1
Windows users need to replace pwd with the path to their current working directory.
You should now be able to access the Prometheus Web UI on http://localhost:9090
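Besides the web UI, the Prometheus HTTP API can be queried directly; for example, using the built-in up metric that Prometheus records for every scrape target:
$ curl 'http://localhost:9090/api/v1/query?query=up'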

What is Grafana and how does it work?

Grafana is a web application that allows you to visualize data sources via graphs or charts. It comes with a variety of chart types, allowing you to choose whatever fits your monitoring data needs. Multiple charts are grouped into dashboards in Grafana, so that multiple metrics can be viewed at once.
https://preview.redd.it/vt8jwu8vpor51.png?width=3584&format=png&auto=webp&s=4101843c84cfc6293debcdfc3bdbe70811dab2e9
The metrics displayed in the Grafana charts come from data sources. Prometheus is one of the supported data sources for Grafana, but it can also use other systems, like AWS CloudWatch, or Azure Monitor.
Grafana also allows you to define alerts that will be triggered if certain issues arise, meaning you’ll receive an email notification if something goes wrong. For a more advanced alerting setup checkout the Grafana integration for Opsgenie.

Starting Grafana

Grafana is also available as Docker container. Grafana datasources can be configured via a configuration file.
Create a configuration file called datasources.yml with the following content:
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    orgId: 1
    url: http://docker.for.mac.host.internal:9090
    basicAuth: false
    isDefault: true
    editable: true
The configuration file specifies Prometheus as a datasource for Grafana. Please note that on Mac, we need to use docker.for.mac.host.internal as host, so that Grafana can access Prometheus. On Windows, use docker.for.win.localhost and for Linux use localhost.
Use the following command to start a Grafana Docker container and to mount the configuration file of the datasources (datasources.yml). We also pass some environment variables to disable the login form and to allow anonymous access to Grafana:
$ docker run --rm -p 3000:3000 \
    -e GF_AUTH_DISABLE_LOGIN_FORM=true \
    -e GF_AUTH_ANONYMOUS_ENABLED=true \
    -e GF_AUTH_ANONYMOUS_ORG_ROLE=Admin \
    -v `pwd`/datasources.yml:/etc/grafana/provisioning/datasources/datasources.yml \
    grafana/grafana:7.1.5
Windows users need to replace pwd with the path to their current working directory.
You should now be able to access the Grafana Web UI on http://localhost:3000

Configuring a Grafana Dashboard

Once the metrics are available in Prometheus, we want to view them in Grafana. This requires creating a dashboard and adding panels to that dashboard:
  1. Go to the Grafana UI at http://localhost:3000, click the + button on the left, and select Dashboard.
  2. In the new dashboard, click on the Add new panel button.
  3. In the Edit panel view, you can select a metric and configure a chart for it.
  4. The Metrics drop-down on the bottom left allows you to choose from the available metrics. Let’s use one of the default metrics for this example.
  5. Type process_resident_memory_bytes into the Metrics input and {{app}} into the Legend input.
  6. On the right panel, enter Memory Usage for the Panel title.
  7. As the unit of the metric is in bytes we need to select bytes (Metric) for the left y-axis in the Axes section, so that the chart is easy to read for humans.
You should now see a chart showing the memory usage of the Node.js HTTP server.
Press Apply to save the panel. Back on the dashboard, click the small "save" symbol at the top right, a pop-up will appear allowing you to save your newly created dashboard for later use.

Setting up alerts in Grafana

Since nobody wants to sit in front of Grafana all day watching and waiting to see if things go wrong, Grafana allows you to define alerts. These alerts regularly check whether a metric adheres to a specific rule, for example, whether the errors per second have exceeded a specific value.
Alerts can be set up for every panel in your dashboards.
  1. Go into the Grafana dashboard we just created.
  2. Click on a panel title and select edit.
  3. Once in the edit view, select "Alerts" from the middle tabs, and press the Create Alert button.
  4. In the Conditions section specify 42000000 after IS ABOVE. This tells Grafana to trigger an alert when the Node.js HTTP server consumes more than 42 MB Memory.
  5. Save the alert by pressing the Apply button in the top right.

Sample code repository

We created a code repository that contains a collection of Docker containers with Prometheus, Grafana, and a Node.js sample application. It also contains a Grafana dashboard, which follows the RED monitoring methodology.
Clone the repository:
$ git clone https://github.com/coder-society/nodejs-application-monitoring-with-prometheus-and-grafana.git 
The JavaScript code of the Node.js app is located in the /example-nodejs-app directory. All containers can be started conveniently with docker-compose. Run the following command in the project root directory:
$ docker-compose up -d 
After executing the command, a Node.js app, Grafana, and Prometheus will be running in the background. The charts of the gathered metrics can be accessed and viewed via the Grafana UI at http://localhost:3000/d/1DYaynomMk/example-service-dashboard.
To generate traffic for the Node.js app, we will use the ApacheBench command line tool, which allows sending requests from the command line.
On MacOS, it comes pre-installed by default. On Debian-based Linux distributions, ApacheBench can be installed with the following command:
$ apt-get install apache2-utils 
For Windows, you can download the binaries from Apache Lounge as a ZIP archive. ApacheBench will be named ab.exe in that archive.
This CLI command will run ApacheBench so that it sends 10,000 requests to the /order endpoint of the Node.js app:
$ ab -m POST -n 10000 -c 100 http://localhost:8080/order 
Depending on your hardware, running this command may take some time.
After running the ab command, you can access the Grafana dashboard via http://localhost:3000/d/1DYaynomMk/example-service-dashboard.

Summary

Prometheus is a powerful open-source tool for self-hosted monitoring. It’s a good option for cases in which you don’t want to build from scratch but also don’t want to invest in a SaaS solution.
With a community-supported client library for Node.js and numerous client libraries for other languages, the monitoring of all your systems can be bundled into one place.
Its integration is straightforward, involving just a few lines of code. It can be done directly for long-running services or with help of a push server for short-lived jobs and FaaS-based implementations.
Grafana is also an open-source tool that integrates well with Prometheus. Among the many benefits it offers are flexible configuration, dashboards that allow you to visualize any relevant metric, and alerts to notify of any anomalous behavior.
These two tools combined offer a straightforward way to get insights into your systems. Prometheus offers huge flexibility in terms of metrics gathered and Grafana offers many different graphs to display these metrics. Prometheus and Grafana also integrate so well with each other that it’s surprising they’re not part of one product.
You should now have a good understanding of Prometheus and Grafana and how to make use of them to monitor your Node.js projects in order to gain more insights and confidence in your software deployments.
submitted by matthevva to node [link] [comments]

Windows file system driver filter driver

I am looking for any info on a feature/regkey/public Microsoft API that will allow me to provide a list of bad filenames that should never be written to storage.
I need a hard block for specific files that I consider a security threat due to their ability to contain arbitrary code (even if executed by trusted system processes), even if the file creation is from an authorized account.
I need a way to prevent the file system from creating files with specific names in any location, so that any attempts by Windows Update, INF copy file, or MSI installers (including invocation from privileged Administrator accounts or System's TrustedInstaller) can never create a file with those specific names.
I am NOT looking for a mechanism to block execution of files already on the system, I need a way to prevent files containing arbitrary code from ever getting to the system in the future as they are coming from a 3rd party that is constantly figuring out some new deployment method to the local system.
At the moment, the only viable solution I have is running such a windows10 environment in a Virtual machine where the hypervisor can scan and roll back the storage image, but its so crude.
Edit: for spelling and elaboration. The goal is to have a system on the network that does not trust the "Domain Administrator" account when it comes to writing those specific files either; the files should be updated only via physical access to the system to flip the setting or execute a proprietary binary that allows updating the files. Group policy with HVCI-based execution control does not seem like an option. The security posture of the system, and whether its network access is permitted, is still subject to the network admin's NAC.
submitted by yourworldisallwrong to Windows10 [link] [comments]

I created an INFO AGGREGATOR for YouTube channels! Has sections for note taking, marking videos as watched, marking videos to rewatch, and a link to every video posted by that channel 🤓

I wanted to have an easy way to take notes on videos I watch on YouTube, and ended up making a Python package to automate the video scraping process for any channel. This package is specifically for scraping videos posted by one channel, and does not support scraping info from multiple channels or linking related videos.
Sources: GitHub, PyPI, releases
pip3 install -U yt-videos-list # MacOS/Linux
pip install -U yt-videos-list  # Windows

python3 # MacOS/Linux
python  # Windows

from yt_videos_list import ListCreator

my_driver = 'firefox' # SUBSTITUTE DRIVER YOU WANT (opera, safari, chrome, brave, edge)
lc = ListCreator(driver=my_driver, scroll_pause_time=0.8)

lc.create_list_for(url='https://www.youtube.com/user/schafer5')
lc.create_list_for(url='https://www.youtube.com/channel/UC8butISFwT-Wl7EV0hUK0BQ')

# see the new files that were just created:
import os
os.system('ls -lt | head')                  # MacOS/Linux
os.system('dir /O-D | find "_videos_list"') # Windows

# for more information on using the module:
help(lc)
For more info about the API and debugging common setup problems, see the API guide. There's also more configuration information and options about which driver to use there, so take a look if you want a better idea! :)

Background

This package uses Selenium with additional logic (in this submodule) to automatically download the relevant Selenium drivers for all browsers you already have. This was crucial since setting up Selenium is often a nightmare the first time (you need to configure path variables if you download it from one place but not another, or you need to move it after you download it, or you need to unzip it, etc...), so the added logic uses curl and tar to download the binaries directly and places the binaries in a location where you don't need to configure anything.
There are also tests here (see the run_tests.sh and run_tests.bat files for an overview) to ensure the output files are consistent every time and across platforms (using hashes to compare expected file to output file). This was initially a source of error since Windows uses CRLF line endings and *nix typically uses LF endings, so I thought this would require manual modification, but turns out it doesn't and this required a bit of tinkering to get right (this is what I incorrectly did the first time, and this is the fix after I realized the problem, and this is the additional configuration you need to do to synchronize output for csv files).
I also added a custom minifier to shrink the source code to save space. This takes all the code from the dev/ directory, strips whitespace and comments, then recreates it in the yt_videos_list/ directory. The goal was to create a minifier similar to the one used by front-end frameworks to shrink shipped code and minimize bandwidth usage. I realize this isn't something that's typically done in Python, but figured since most users just pip install the package and rarely look at the actual source code, this could be something I could do without causing many problems. 🤓
All this said, is there anything else I can do to make this project better? Mainly looking for feedback on design choices and readability, since these 2 things tend to cause the most problems when working on a new codebase, but if any of you have other feedback I'd love to hear it!
submitted by __forever_curious to Python [link] [comments]

Is there any way to handle a binary data from REST request as arraybuffer in web? #69819

Hi, I came across a problem when making a REST request which returns binary image data (the same as this stackoverflow thread). The solution in this thread is to convert the response into an arraybuffer, aka response.arraybuffer()
Is there any equivalent solution for Flutter? I have tried many ways to convert the weird binary string like this (�(�f����G���������X?�������pF�4��.0�׉��X��������C�gU��I��|�,��������t���������)
by base64.decode() or utf8.encode() but with no luck.
This is my request:
final response = await dio.get(
  Consts.baseUrl + endPointUrl,
  queryParameters: {
    '_id': 'xxx',
    'type': 'xxx'
  },
  options: Options(
    responseType: ResponseType.bytes,
    headers: {'Authorization': Consts.apiKey},
    contentType: 'application/octet-stream',
  ),
);
And when I debug the response, I got : https://user-images.githubusercontent.com/33485572/98230620-079f4080-1f64-11eb-828d-341d690df34a.png
Thank you. Hope to hear from you soon.
submitted by Harrisonnguyen1210 to flutterhelp [link] [comments]


Binary options API: before using the Binary.com API, you must register your application. Open an account at Binary.com (either a Virtual Account or a Real Account), go to Security & Limits, select API token and create a new token with the admin scope, then register your app to obtain your app_id. Certain API calls require client authentication (e.g. portfolio) whilst other calls do not. Binary options brokers will generally have their trading platform open when the market of the underlying asset is open, so if trading the NYSE, Nasdaq, DOW or S&P, the assets will be open to trade during the same hours as those markets are open.


