logicpng · 2 years ago
Text
I think I can say with certainty that I'm past the halfway point with this. there's not that much random dialogue left to make up
I can only hope the switch works as intended on other computers, since a different timing left them mid-transition. it seems like it doesn't interrupt the bubbles switch but it's only if the menu switch/shell reset comes at a specific point before it 🤔
sakurascript is really weird with calling functions, but I think if you call it as a variable ( %(function) ) it doesn't interrupt the script?? maybe??
[Image ID:
Two gifs showing off Vega complaining about the messiness of Windows' system32 folder, providing the user with a link to open it and see for themselves, and the right click context menu changing its color scheme alongside Vega switching to Rigel.
End ID]
kevindew · 6 years ago
How to write an OpenAPI 3 parser
Two years ago I pushed my first commit towards openapi3_parser, a Ruby gem to convert a file conforming to the OpenAPI 3 specification into a graph of Ruby objects. I felt that this milestone marked an appropriate time to reflect on my learnings from this process and explain some key decisions. When I began working on this gem there were only a few, limited implementations of parsers for OpenAPI version 3 that I could take inspiration from, and none in Ruby. This helped drive my interest in the project as I felt there were fresh challenges ahead.
This article serves to share information I’d have found useful when I started this project, with the aim of helping others who are deciding how to work with OpenAPI files in any programming language. It’ll talk you through some of the key design decisions for an OpenAPI parser and explanations of the routes I took.
Do you even need an OpenAPI parser?
The first thing to ponder is whether anyone needs a specific tool to parse an OpenAPI file at all. After all, these are either YAML or JSON files, and both file types can easily be converted into an associative array structure in most modern programming languages.
The answer to this is yes, though you probably could do without one if it wasn’t for one pesky - but rather central - feature: references. These allow you to reference data in another part of the OpenAPI file or in a separate file. That referenced data can itself contain further references, so you can quickly end up with a complex data structure.
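To make the reference problem concrete, here is a minimal, hypothetical sketch (not code from openapi3_parser) of what resolving a local `$ref` pointer against already-parsed data involves. A real parser also has to handle external files, URLs and recursive references.

```ruby
# Hypothetical helper: resolve a local "#/..." JSON pointer against
# parsed OpenAPI data held in a Hash.
def resolve_pointer(data, pointer)
  # "#/components/schemas/Pet" -> ["components", "schemas", "Pet"]
  parts = pointer.delete_prefix("#/").split("/")
  # JSON pointer unescaping: ~1 -> "/", ~0 -> "~"
  parts = parts.map { |p| p.gsub("~1", "/").gsub("~0", "~") }
  # Walk down the hash, returning nil if any step is missing
  parts.reduce(data) { |node, key| node && node[key] }
end

openapi_data = {
  "components" => {
    "schemas" => {
      "Pet" => { "type" => "object" }
    }
  }
}

resolve_pointer(openapi_data, "#/components/schemas/Pet")
# => {"type"=>"object"}
```

Even this toy version hints at the edge cases (missing locations, escaped characters) that a parser must account for.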
As well as resolving references, a parser offers numerous advantages over working with the raw data: it allows you to convert the data into domain objects with documented interfaces, it offers the potential for elegant ways to traverse the OpenAPI structure, helper methods can be added to classes representing individual objects, and data validity can be determined in advance of working with the data.
Were there to be a data format similar to YAML or JSON that offered a native ability to handle references it would be substantially easier to work with OpenAPI files and building a parser would be a massively simpler experience, but as far as I'm aware no suitable options exist.
Starting with the user experience
One of the first things to consider is how a user, whether yourself or a wider audience, is going to experience a parser. With openapi3_parser I aimed to have the least friction between a user's input and them working with the data.
For example:
document = Openapi3Parser.load_url("https://raw.githubusercontent.com/OAI/OpenAPI-Specification/master/examples/v3.0/petstore.yaml")
document.paths["/pets"].post.description
=> "Creates a new pet in the store. Duplicates are allowed"
On the root module, Openapi3Parser, there are different methods to load data in: load, load_file and load_url, which respectively allow loading raw data, loading from the filesystem or loading based on a URL. Each of these methods returns an instance of the Document class, an object that represents the overall document and provides an interface into the OpenAPI data.
This, I believe, achieves a fluent user experience where the guts of the parser's implementation are hidden from the user and they can quickly be working with the data.
Getting data into a parser
Given an interface for how someone will use a parser, you then need to work out how to get data into it. The simplest approach would be to have the user of your parser convert the YAML or JSON data of an OpenAPI file into an object structure and pass this in.
For example:
openapi_data = YAML.load_file("openapi.yaml")
Openapi3Parser.load(openapi_data)
As previously mentioned, openapi3_parser has 3 individual methods for input; these actually expand into a variety of ways a user can initialise the parser. The load_file method accepts a file path string, the load_url method accepts a URL string, and the load method can accept a Hash, a File, a string input and - given we’re working in the duck typing world of Ruby - objects that match the interfaces of these.
This amount of options may seem surprisingly large, and superfluously flexible. However, most of these are actually by-products of the need to work with a reference system that can support external files.
As a reference can link to a URL or a file we need the ability to load these items automatically. This leads us to the underlying code needed for load_file or load_url. The process of turning a file into parsed data involves us working with a File object, parsing the string input of that file and finally working with the contents of that file converted to a Hash. Thus, as all these steps are needed for working with a separate file, we can then use them to deal with a variety of input.
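Roughly (this is an illustrative sketch, not the gem's internals), a single load method can accept a Hash, a String or a file-like object by dispatching on the input's type, reusing the same parsing steps needed for external references:

```ruby
require "yaml"
require "stringio"

# Hypothetical dispatcher: accept already-parsed data, a raw string,
# or anything responding to #read (File, StringIO, etc.)
def load_openapi(input)
  case input
  when Hash then input
  when String then YAML.safe_load(input)
  else
    # duck-typed file-like object
    YAML.safe_load(input.read)
  end
end

load_openapi("openapi: 3.0.0")["openapi"]                # => "3.0.0"
load_openapi(StringIO.new("openapi: 3.0.0"))["openapi"]  # => "3.0.0"
```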
Behind the scenes in openapi3_parser these individual files (or raw data input) are represented as Source objects that encapsulate the concerns of that particular file. These Source objects are initialised with an instance of SourceInput, which has subclasses for Raw, Url and File variants that reflect the 3 types of user input we anticipate. Common libraries are used to turn any string input (whether loaded from a file or as actual user input) into a basic hierarchical data structure.
Many of the parsers available for OpenAPI don’t provide support for references that exist in separate files, which allows them to bypass this complexity. This is an attractive option for simplification of a parser as references to separate files don’t seem to be commonly used and the complexity they introduce is high.
Turning data into types
If we want to provide a user with classes that represent the various OpenAPI objects then we need to take the hierarchical data from an OpenAPI file and use it to build specific objects. This conversion process can be rather simple: input the underlying data into an object that represents the root OpenAPI object and have that use a decorator pattern to provide methods for its properties; each of these can then return their own decorated class for their object.
For example:
class Node::Openapi
  def initialize(data)
    @data = data
  end

  def info
    Node::Info.new(@data["info"])
  end

  # ..
end
There are challenges to resolve, however, and this approach can quickly increase in complexity.
The first challenge is references. Any referenced data needs to be resolved either prior to the object's creation or at the point a reference is found. I opted to do this at runtime in openapi3_parser and used a Context class as a means to bridge an object and its underlying data source. This Context object can then be used to resolve references when they are encountered.
Another challenge is whether items exist or are valid. Since the data from an OpenAPI file is just sourced from YAML or JSON, there are no guarantees that data will be available at any point or that it'll be of the type you anticipate. My approach was to use a layer of validation that checked the input type. I felt this offered the ability to error early when an object had invalid data, rather than a user receiving a cryptic Ruby NoMethodError; it also meant the documentation for return types could be accurate and reliable.
Dealing with these complexities led me to switch to a different system from the simple example above for building objects. Rather than have a single class per object with both usage and building responsibilities, I switched to a system where there are factory classes to build each OpenAPI object. This offered large gains in simplicity, as the classes for working with an individual object are mostly trivial in complexity; however, it has the disadvantage that for every OpenAPI object there are two classes to consider.
As there are many (~30) different types of OpenAPI objects, I took advantage of Ruby's metaprogramming to set up a DSL that could be used to define the fields and their respective types concisely.
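As an illustration of the idea (this is a toy version, not openapi3_parser's actual DSL, which also handles defaults, references and richer validation), Ruby's define_method makes such a field DSL compact:

```ruby
# Toy factory base class: `field` defines a typed accessor on subclasses.
class NodeFactory
  def self.field(name, type:)
    define_method(name) do
      value = @data[name.to_s]
      unless value.nil? || value.is_a?(type)
        raise TypeError, "#{name} must be a #{type}"
      end
      value
    end
  end

  def initialize(data)
    @data = data
  end
end

# Each OpenAPI object type then becomes a short declaration.
class InfoFactory < NodeFactory
  field :title, type: String
  field :version, type: String
end

info = InfoFactory.new("title" => "Pet Store", "version" => "1.0.0")
info.title # => "Pet Store"
```

The type check in the accessor is what allows erroring early with a meaningful message instead of a NoMethodError further down the line.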
Working with references
The most complicated part of writing an OpenAPI parser is working with references, particularly supporting those in different files.
There are a number of things to understand and consider in resolving them. First, references can reference other references, so the resolving process may need to look up multiple document locations. Second, references can be recursive, which can easily lead to infinite loops if not handled.
There are a number of things that can go wrong with references that may also need to be handled: the reference could point to a location that doesn't exist, the data at the location could be the incorrect type, a file may not be available and references could point to other references in a cyclic structure.
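A minimal sketch of the recursion problem (illustrative only; the gem's real implementation lives in its reference classes, described below): tracking which pointers are already being resolved is enough to detect a cycle instead of looping forever.

```ruby
# Hypothetical resolver that follows chained "$ref" pointers and
# raises on a cyclic chain by tracking pointers already seen.
def follow_refs(data, pointer, seen = [])
  raise "cyclic reference: #{pointer}" if seen.include?(pointer)

  keys = pointer.delete_prefix("#/").split("/")
  node = keys.reduce(data) { |n, k| n && n[k] }

  if node.is_a?(Hash) && node.key?("$ref")
    follow_refs(data, node["$ref"], seen + [pointer])
  else
    node
  end
end

data = {
  "a" => { "$ref" => "#/b" },
  "b" => { "$ref" => "#/a" },  # "#/a" and "#/b" form a cycle
  "c" => { "type" => "string" }
}

follow_refs(data, "#/c") # => {"type"=>"string"}
# follow_refs(data, "#/a") would raise "cyclic reference: #/a"
```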
The approach taken in openapi3_parser is to have a number of classes with key responsibilities for handling a reference. There is a class to represent a reference object, which facilitates looking up a reference and checks for cyclic pointers. The Source class has a method to determine the source file and return a ResolvedReference object that mediates between the resolved data and the initial reference. The ResolvedReference class can either build the object or communicate any issues that prevent building it. The final key class is the ReferenceRegistry, an object associated with the document that stores all the source files and references used - this allows the same objects to be reused each time the same reference is encountered, which can vastly increase the speed of parsing.
Validating input
Parsers aren’t necessarily validators, but they have to understand the source files in similar ways. In my experience with openapi3_parser I found that a substantial amount of validation logic was needed in order to provide accurate return types. I decided to take this further and aim to add the remaining validation rules so that the gem could provide full validation.
Whether this was a great idea or not I am not sure. A consequence of having a good understanding of the rules is that openapi3_parser is quite strict and will raise an exception to inform users of an error. Having seen that it isn't especially rare to find small mistakes in OpenAPI files (a common one being a description alongside a reference), I do wonder if the strictness can be a burden rather than a help to users; it has led me to consider whether a lenient mode would be beneficial.
In the time since work began on openapi3_parser there has been momentum around providing an official JSON schema file that can validate OpenAPI files, so it could be that over time the relevance of a parser providing validation decreases. However, JSON schema cannot provide everything needed to validate to the OpenAPI spec (for example, it cannot validate that a reference resolves, or that it resolves to the correct type), so it remains to be seen whether the JSON schema approach is good enough to become the de-facto validation method.
With hindsight I would advise someone building an OpenAPI parser to only worry about performing validation on areas where it is necessary to have a parser conform to an API - which essentially would be just performing type checks.
Working with different versions of OpenAPI
At the time of writing the current version of OpenAPI is 3.0.2, with 3.1.0 on the horizon and changes coming. Before that there is OpenAPI 2.0, which shares a specification with Swagger 2.0. Currently, when trying to find an OpenAPI parser, there is frequently confusion as to whether it will support OpenAPI v3.0 or v2.0 - with it being rare that both are supported.
In building a parser you'll have to consider and communicate what versions of the specification are supported and how you'll handle future versions. With openapi3_parser I chose to name it with the version number to make it distinct from the open_api_parser gem provided by Braintree, which only supports Swagger 2.0 - at the risk of dooming this project when OpenAPI 4.0 comes out. I didn't see value in providing a parser that supported both 2.0 and 3.0, since these are different specifications and 2.0 already had parsers.
Looking towards the future, I'm anticipating newer versions of the specification that will need different parsing logic. To support this I set up a basic system that produces a warning if the version of OpenAPI doesn't match. Going forward, I'm hoping that pre-determining the minor version used will make it not too difficult to handle multiple versions of OpenAPI with minimal changes to the parser.
Quirky things to handle
In the process of building an OpenAPI parser there are a few things that feel somewhat inconsistent with the rest of the spec that you may have to deal with. Here is a catalogue of the ones I encountered and how I approached, or intend to approach, them.
Path Item reference
Throughout most of OpenAPI an object can either be a reference or be an object; there is no middle ground, and you can't include extra data in a reference to adjust a referenced object. However, an exception to this rule is the Path Item object, which allows a form of merge behaviour between the reference and the referenced object. In theory the referenced Path Item object could itself reference a different Path Item object, so the merge could contain multiple layers of data.
The approach taken in openapi3_parser for this is to validate referenced objects separately and merge the data together at the point of building the object.
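One plausible interpretation of that merge step (a sketch under my own assumptions, not the gem's exact code) is an ordinary hash merge where locally defined fields take precedence over the referenced Path Item's, with the $ref key itself dropped:

```ruby
# Data the $ref points at (already validated separately)
referenced = {
  "get" => { "summary" => "List pets" },
  "description" => "Generic pet path"
}

# The Path Item as written, with extra fields alongside the $ref
local = {
  "$ref" => "#/paths/~1pets",
  "description" => "Pets, with an override"
}

# Local fields win; the "$ref" key itself is not part of the result
merged = referenced.merge(local.reject { |k, _| k == "$ref" })
merged["description"] # => "Pets, with an override"
merged["get"]         # => {"summary"=>"List pets"}
```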
The Link object
The Link object in the OpenAPI specification features two mutually exclusive fields that associate a link with an operation: operationId and operationRef. The operationId field is the simpler of the two: an id value that must be present on an Operation object within the document. The operationRef field is more complicated, as it is a reference that may refer to the current file or an external file.
At the time of writing this is an area of the OpenAPI specification that openapi3_parser does not support. My expectation is that operationId will be relatively simple to support, with runtime lookups to find a particular operation, whereas operationRef will be treated similarly to other references.
Schema names
Schema objects can refer to a property outside the object: the schema name. This is data inferred from a schema's location within the #/components/schemas map; for example, a schema defined at #/components/schemas/Dog has a name of "Dog". This is somewhat problematic as schemas defined directly (within a Media Type object or another Schema object) do not have a name. The issue is confused further by the presence of a title field on a Schema object, which is unrelated to the schema name.
Life would be simpler if the title field was the preferred way of naming schemas, this would allow any Schema to be named no matter where they are defined and would match conventions of other referenced definitions in the components map that don't make any use of their identifier outside of references. However by convention authors rarely use the title field and frequently define most schemas in #/components/schemas so it would be superfluous to use the title field as well.
In openapi3_parser a method has been added to Schema objects to extract their name based on the location in the file they have been defined.
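In spirit (this is a hypothetical helper, not the gem's actual method), that name extraction amounts to pattern-matching the schema's document location, returning nil for inline schemas that have no name:

```ruby
# Hypothetical: infer a schema's name from its location pointer.
# Only schemas directly under #/components/schemas have a name.
def schema_name(location)
  match = location.match(%r{\A#/components/schemas/([^/]+)\z})
  match && match[1]
end

schema_name("#/components/schemas/Dog")          # => "Dog"
schema_name("#/paths/~1pets/get/responses/200")  # => nil
```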
Servers
The Server object can be defined in 4 locations of an OpenAPI file: the OpenAPI object, Path Item object, Operation object and Link object. An unusual characteristic of this object is that it acts in a cascading form: looking up the servers of an operation returns the operation's servers if available, otherwise the Path Item's, and failing those, the ones from the root OpenAPI object. Unusually too, the root servers property on the OpenAPI object defaults to an array containing a single Server object; as far as I'm aware this is the only instance where OpenAPI defaults to an object within a collection.
This is a part of openapi3_parser that is yet to be implemented (it's pretty much next on the list). Creating the default object may well test the system in place for array defaults. For the cascading properties I imagine the simplest approach would be a runtime lookup when accessing an operation's servers, falling back to the Path Item's and then the root object's. To provide this there'll need to be a simple way to traverse down the graph back towards the root.
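The cascade I have in mind could look something like this (a sketch of the intended behaviour, not an implemented API; the names here are my own):

```ruby
# Per the spec, the root servers default to a single "/" Server object.
DEFAULT_SERVERS = [{ "url" => "/" }].freeze

# Hypothetical cascading lookup: first non-empty list wins,
# falling back to the spec's default.
def effective_servers(operation:, path_item:, root:)
  [operation, path_item, root].each do |servers|
    return servers if servers && !servers.empty?
  end
  DEFAULT_SERVERS
end

effective_servers(operation: nil,
                  path_item: [{ "url" => "https://staging.example.com" }],
                  root: [{ "url" => "https://example.com" }])
# => [{"url"=>"https://staging.example.com"}]
```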
Default values for maps and arrays
An area that I don't think is cracked properly with openapi3_parser is the approach to take for default values of array and map objects, specifically whether the absence of a value indicates an empty collection or a null value. The OpenAPI specification doesn't define the behaviour of these so it is left to the interpretation of the parser.
Initially openapi3_parser approached these with an opinion that they should never be null and default to empty collections. This felt like an approach that was developer friendly as you could anticipate that they'd always return an instance of the same class (the collection) and you'd not have to consider whether it's null or empty.
This however didn't work out, and was most noticeably flawed on the Schema object with the anyOf, allOf and oneOf fields. JSON Schema defines a valid value for these fields to be an array of at least one item, so an empty array contravenes the specification. A default value of an empty array also implied a meaning, in contrast to other fields that could be null.
For example:
if schema.oneOf
  # meets condition but is an empty array
  # we presume there is a oneOf
end

if schema.description
  # can be a string or null
  # we presume presence of a description
end
To handle this in openapi3_parser many of the map and array objects can be a collection or null and are not coerced into an empty collection. This is the case for any scenarios where the presence of this collection may imply a meaning such as on a Schema object. In places where presence doesn't imply additional meaning an empty collection can be defaulted. This applies for example to all the child properties of a Components object which makes them simpler to traverse.
Ideally it'd be great if the specification itself was clear on whether the absence of data should be treated as an empty collection or a null value. This would resolve ambiguity for parser implementors.
Reflection
So, as I said at the start of this article, this marks 2 years since I started working on openapi3_parser, and I'm somewhat amazed that a) I'm still interested in it and b) it still hasn't reached version 1.0. I'm pleased that during this period there haven't been too many drastic changes in the OpenAPI specification, which has helped maintain the project's relevance. It's been a fun ride for me, learning more about the specification and trying to define an intuitive, maintainable API for developers to use.
My next aims for the project are to continue working towards 100% compliance with the specification and then to consider the 1.0 release of the project. After that I'd like to identify any methods that can make it easier to work with the API - like quick ways to look up particular paths or schemas. I'd also love to get more contributors involved, do get in touch if that's of interest to you.
In the time since I started the project I've noticed that at least 2 similar projects for Ruby have been released: openapi_parser and oas_parser, which are alternatives to consider if you are looking for a Ruby OpenAPI parser and don't like mine :-). It does seem a shame to me that we've ended up with multiple projects solving similar problems, so if the developers of any of those are interested in collaborating and consolidating these into a single parser, please get in touch.
I'm interested to hear if anyone finds this information useful for parsers they are creating, or wants help with one they are working on; if this is you, please ping me on GitHub so we can talk more.
psychotherapyconsultants · 7 years ago
The Pressure Cooker Before College: How to Navigate and Actually Help Your Teen
The senior year countdown to college brings out parents’ worries and fears alongside teens’ own anxieties and self-doubt. During this time of escalating pressure and stress in families, parents can fall into common traps that defeat their intention to help and interfere with teens developing capacities that are the foundation for succeeding once they’re at college.
When the dynamics associated with these traps are at play, parents become part of the problem rather than a resource for help. Approaches that seem instinctive, or even necessary, paradoxically derail teens and increase their need to avoid parents. Awareness of these traps and being prepared with positive alternatives empowers parents to bring out the best in teens, instill an experience of parents as someone they can turn to later on, and foster psychological growth that makes it more likely that teens will adapt to college.
Common parenting traps:
Trap 1. Overfocus on achievement, getting into a prestigious college, and/or pursuing the right career path.
It’s easy to get caught in the frenzy to get teens into the most competitive college or be blinded by our own vision for them. Teens are embedded in a cultural agenda where success is defined by perfectionism, status, and how things appear. But parents’ attitude and state of mind can either ground them or ramp up the pressure.  
The fear-driven need to “secure” our teen’s future sets up a high-stakes equation with dire consequences if they don’t live up to expectations – not the least of which is catastrophically disappointing their parents and “failing in life.” Here teens internalize a lack of faith – increasing their own insecurity and uncertainty about their future as well as compounding confusion about who they are and who they should be. Further, when teens are overwhelmed and anxious, executive functions shut down — making it harder to stay on track.
Imposing our own agenda onto teens breeds superficial compliance, passivity, and internal angst whereas supporting teens’ evolving identity fosters sustainable self-motivation, curiosity, and purpose. Through being calm and open-minded, parents can nurture resilience and flexibility in teens — capacities associated with success and mental health — rather than perpetuate the myth that everything rides on a particular decision or path.
Positive alternative:
Practice letting go of trying to control the outcome.
Have faith.
Envision scenarios that differ from what you had imagined.
Leverage teens’ own positive motivation and don’t use fear tactics.
Focus on being in the present moment.
Work on calming your anxiety and being a responsive, “non-anxious presence” (Stixrud, 2014).
Allow easy interactions: make it a point to not have the majority of your contact consist of you bringing up stressful topics, reminding teens to do things, or questioning them.
Trap 2: Seeing teens as a finished product and panicking that it’s your last chance to impact them.
Teens are a work in progress. They will continue to change and mature. If we look back at our own lives or have ever gone to a high school reunion, we’re reminded that our high school selves do not have to determine or foreshadow our future. Exaggerating the stakes at this juncture is a sign of loss of perspective and creates a counterproductive atmosphere of panic, pressure and doom. Alternatively, a climate of acceptance, faith, and possibility is not only more grounded in reality but expands teens’ psychological bandwidth and capacity for recovery and perseverance in the face of a range of outcomes.
Repetitive focus on issues you’ve never had traction with before not only demoralizes teens, but causes parent burnout and erodes the relationship. Alternatively, noticing your teen’s genuine strengths builds on their competencies and successes, helps insulate them when facing weaknesses, and promotes improved performance and attitude.
This approach gives teens a positive experience of being around you before they leave home which not only generates inner security, but will allow them to reach out to you when they’re on their own (since parenting is not over yet). When teens leave home, their relationship with parents has the potential to become more peaceful, less conflictual, and closer — and frequently does. With autonomy a given and physical separation providing needed distance, control struggles become less relevant, parents are forced to let go, and teens are freed up to be more receptive.
Positive alternative:
Notice your teens’ strengths and competence.
Appreciate the good in your teen.
Create opportunities to spend time with teens by offering to do things they like or that they will find helpful (take them out to eat, give them a ride) but on their schedule and not from a position of neediness.
Trap 3. Taking charge of teens: rescuing or being a stand in for them.
Teens on the performance treadmill who “succeed” without incident in high school, but fail to develop a secure sense of self, may crash with less support in college when faced with increasing challenges and disappointment. Without a realistic sense and acceptance of their own strengths and weaknesses, or the skills to deal with inevitable “failures,” teens will be ill-equipped to cope (Margolies, 2013). Taking charge of their lives for them deprives teens of the space to learn how to manage themselves, solve problems, and try out what they can do while still at home.
To be helpful, parents must find a way to have faith, let go of (the illusion of) control, and respect teens’ separateness from them — bearing the feelings of loss inherent to this transition. In an improved parenting model, your teen is in the role of director of his own life — with you as consultant, not owner. This approach not only reduces struggles and empowers parents to be more effective, but positions the relationship to be compatible with a structure that will work when they’re at college.
Instead of imparting wisdom, telling them what to do, or doing things for them — parents’ role is to facilitate teens finding their own way and help them think things through. This involves being a “non-anxious”, non-intrusive, but available and responsive presence — letting teens take the lead about how, and when, you can help.
Teens are more likely to interact when parents show an unbiased interest in their opinions, what they enjoy, and their expectations of themselves — from a stance of curiosity without an agenda — demonstrating respect for their separateness and boundaries.
This parenting approach supports teens’ ability to reflect, weigh options, and make decisions from an internal sense of themselves — cultivating autonomy, identity, and competence (Nagaoka et al., 2015). By promoting the development of internal scaffolding, parents offer teens real protection in the form of greater capacity to master future challenges.  
Positive alternative:
Let your teen be responsible for his life.
Offer, don’t impose, help and consider timing — following your teen’s lead.
Prioritize investing in your future relationship, rather than struggles.
Promote autonomy and mastery — the building blocks of self-motivation (Nagaoka et al., 2015).
Help teens discover themselves — the basis for good decisions (Nagaoka et al., 2015).
Trust that your teen wants his life to work out, is doing the best he can (different from your best) and will find his way.
These traps all involve a loss of perspective — fueled by fear, blurred boundaries, and overidentification with teens. When we’re anxious and overly focused on external goals, it restricts our field of vision — and we lose sight of our teen as a person. Teens of parents who are caught up in these dynamics talk about feeling alone, despite how involved their parents are. They experience their parents as out of touch with who they are and how they feel inside — unaware of what life is like for them day to day, what they care about, how they think and feel, and what’s important to them.
Our values and mindset are transmitted to our children, not by telling them how much we support them, but through our own emotional state and through what we notice, are impressed with, and praise or discourage in them. Unless teens internalize the subjective feeling of being accepted and supported they will be vulnerable to concealing and hiding in fear/shame when they’re in trouble or need help — a frequent cause of unforeseen difficulties at college that spiral out of control. A positive relationship as experienced by teens is the number one way to invest in their future because it allows parents to stay relevant and impact them even when they’re no longer a captive audience.
References:
Margolies, L. (2013). The Paradox of Pushing Kids to Succeed. Retrieved from https://psychcentral.com/lib/the-paradox-of-pushing-kids-to-succeed/
Nagaoka, J., Farrington, C.A., Ehrlich, S.B., Heath, R.D. (June 2015). Foundations for young adult success: a developmental framework. Concept paper for research and practice. Retrieved from https://consortium.uchicago.edu/publications/foundations-young-adult-success-developmental-framework
Stixrud, William R. (2014, November). Teaching the Stressed, Wired and Distracted Teenage Brain. Paper presented at the Learning and the Brain Conference: Focused, Organized Minds: Using Brain Science to Engage Attention in a Distracted World, Boston, MA.
from World of Psychology https://psychcentral.com/blog/the-pressure-cooker-before-college-how-to-navigate-and-actually-help-your-teen/