PL/SQL is supported by ctags, according to the documentation mentioned here. However, I couldn't get it working properly; in particular, tags were not generated for some of the functions. So I was reading through the ctags manual looking for the answer when I saw this:
After I ran it I got this:
So prototypes are disabled by default. That looks promising. I enabled it by doing this:
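With Exuberant Ctags, the commands look something like this (the exact kind letter for SQL prototypes may differ between ctags versions, so check the `--list-kinds` output first):

```shell
# List the tag kinds ctags supports for SQL/PL-SQL;
# kinds shown with [off] are disabled by default
ctags --list-kinds=sql

# Enable the prototype kind (assumed here to be 'd') and regenerate tags
ctags --sql-kinds=+d -R .
```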
Palindromes appear frequently in all kinds of computer science textbooks. A palindrome is a word, phrase, number, or other sequence of units that reads the same forwards and backwards. The intuitive way to check whether a string or a linked list is a palindrome is to use a stack. A stack is a LIFO (Last In, First Out) data structure, so popping its contents yields the elements in reverse order, which is exactly what a palindrome check needs.
Here is a simple Java implementation for a linked list:
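A minimal sketch of the stack-based check (the Node class and method names are my own):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PalindromeList {
    // Minimal singly linked list node
    static class Node {
        char value;
        Node next;
        Node(char value) { this.value = value; }
    }

    // Push every element onto a stack, then walk the list again:
    // popping yields the elements in reverse, so each pop must
    // match the corresponding forward element.
    static boolean isPalindrome(Node head) {
        Deque<Character> stack = new ArrayDeque<>();
        for (Node n = head; n != null; n = n.next) {
            stack.push(n.value);
        }
        for (Node n = head; n != null; n = n.next) {
            if (stack.pop() != n.value) {
                return false;
            }
        }
        return true;
    }

    // Helper to build a list from a string for testing
    static Node fromString(String s) {
        Node head = null, tail = null;
        for (char c : s.toCharArray()) {
            Node node = new Node(c);
            if (head == null) head = node; else tail.next = node;
            tail = node;
        }
        return head;
    }

    public static void main(String[] args) {
        System.out.println(isPalindrome(fromString("racecar"))); // true
        System.out.println(isPalindrome(fromString("hello")));   // false
    }
}
```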
There are many ways to find the kth to last element of a singly linked list. If you know the size of the linked list, the problem would be very easy since the kth to last element is the (length – k)th element.
Assume we have no idea about the size of the linked list, what can we do?
Here is a recursive algorithm. Let's treat k = 0 as the last element, k = 1 as the second to last element, and so on. The idea is to recurse to the end of the list, with each recursive call adding 1 to the counter as the recursion unwinds. The stopping condition is passing the last node, which returns -1. The algorithm only returns one thing: the reverse index value. A more complete implementation would have to return the node too. Here is my simple implementation in Java:
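A sketch of that recursion (the `found` holder and helper names are mine; the simplest version could just print the value instead of storing it):

```java
public class KthToLast {
    // Minimal singly linked list node
    static class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    // Holder for the answer, filled in when the recursion unwinds.
    static Integer found;

    // Recurse to the end; past the last node return -1, then add 1 at
    // each level on the way back, so the return value is each node's
    // distance from the tail (k = 0 is the last element).
    static int kthToLast(Node head, int k) {
        if (head == null) {
            return -1;
        }
        int index = kthToLast(head.next, k) + 1;
        if (index == k) {
            found = head.value;
        }
        return index;
    }

    // Helper to build a list for testing
    static Node fromArray(int... values) {
        Node head = null, tail = null;
        for (int v : values) {
            Node node = new Node(v);
            if (head == null) head = node; else tail.next = node;
            tail = node;
        }
        return head;
    }

    public static void main(String[] args) {
        Node list = fromArray(1, 2, 3, 4, 5);
        kthToLast(list, 0);
        System.out.println(found); // 5 (the last element)
        kthToLast(list, 2);
        System.out.println(found); // 3 (the third to last)
    }
}
```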
This summer I was working on this interesting project. I believe WebSocket is going to be very popular in the near future. With WebSocket, we can build web-based interactive games, stream dynamic media, or even bridge existing network protocols. These things are almost impossible to do with HTTP and AJAX. If you want to know more about the benefits of WebSocket, you can read this article. Here I mainly focus on the implementation of a WebSocket server. My WebSocket server is based on RFC 6455.
Once a connection to the server has been established, the client MUST send an opening handshake to the server.
An HTTP/1.1 or higher GET request.
A Host header field containing the server's authority.
An Upgrade header field containing the value "websocket", treated as an ASCII case-insensitive value.
A Connection header field that includes the token "Upgrade", treated as an ASCII case-insensitive value.
The value of the Sec-WebSocket-Key header MUST be a nonce consisting of a randomly selected 16-byte value that has been base64-encoded. The nonce MUST be selected randomly for each connection.
Optionally, an Origin header field. This header field is sent by all browser clients. A connection attempt lacking this header field SHOULD NOT be interpreted as coming from a browser client.
Once the client's opening handshake has been sent, the client MUST wait for a response from the server before sending any further data. The client MUST validate the server's response as follows:
If the response lacks an Upgrade header field, or the Upgrade header field contains a value that is not an ASCII case-insensitive match for the value "websocket", the client MUST fail the WebSocket connection.
If the response lacks a Connection header field, or the Connection header field doesn't contain a token that is an ASCII case-insensitive match for the value "Upgrade", the client MUST fail the WebSocket connection.
If the response lacks a Sec-WebSocket-Accept header field, or the Sec-WebSocket-Accept header contains a value other than the base64-encoded SHA-1 of the concatenation of the Sec-WebSocket-Key with the string "258EAFA5-E914-47DA-95CA-C5AB0DC85B11" (ignoring any leading and trailing whitespace), the client MUST fail the WebSocket connection.
If the server chooses to accept the incoming connection, it MUST reply with a valid HTTP response indicating the following.
The first line is an HTTP Status-Line, with the status code 101.
The Connection and Upgrade header fields complete the HTTP Upgrade.
To prove that the handshake was received, the server has to take two pieces of information and combine them to form a response. The first piece of information comes from the Sec-WebSocket-Key header field in the client handshake. For this header field, the server has to take the value and concatenate it with the Globally Unique Identifier (GUID) "258EAFA5-E914-47DA-95CA-C5AB0DC85B11" in string form. The server then takes the SHA-1 hash of this, which is base64-encoded to give the value "s3pPLMBiTxaQ9kYGzzhZRbK+xOo=".
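In Java, the server's accept-value computation can be sketched like this:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class HandshakeAccept {
    // Fixed GUID defined by RFC 6455
    static final String GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

    // Concatenate the client's Sec-WebSocket-Key with the GUID,
    // SHA-1 hash the result, and base64-encode the digest.
    static String acceptValue(String secWebSocketKey) {
        try {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            byte[] digest = sha1.digest(
                (secWebSocketKey + GUID).getBytes(StandardCharsets.US_ASCII));
            return Base64.getEncoder().encodeToString(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-1 not available", e);
        }
    }

    public static void main(String[] args) {
        // The example key from RFC 6455 section 1.3
        System.out.println(acceptValue("dGhlIHNhbXBsZSBub25jZQ=="));
        // prints s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
    }
}
```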
This completes the server’s handshake. If the server finishes these steps without aborting the WebSocket handshake, the server considers the WebSocket connection to be established and that the WebSocket connection is in the OPEN state. At this point, the server may begin sending (and receiving) data.
Either peer can send a control frame to begin the closing handshake. The Close frame contains an opcode of 0x8. Upon receiving such a frame, the other peer sends a Close frame in response, if it hasn't already sent one. Close frames sent from client to server must be masked. Upon receiving the control frame, the first peer then closes the connection.
After sending a control frame indicating the connection should be closed, a peer does not send any further data; after receiving a control frame indicating the connection should be closed, a peer discards any further data received.
A high-level overview of the framing is given in the following figure.
FIN: When set indicates that this is the final fragment in a message.
RSV1, RSV2, RSV3: MUST be 0 unless an extension is negotiated that defines meanings for non-zero values.
Opcode: defines the interpretation of the "Payload data":
%x1 denotes a text frame
%x2 denotes a binary frame
%x8 denotes a connection close
Mask: Defines whether the ”Payload data” is masked. All frames sent from client to server have this bit set to 1.
Payload length: the length of the "Payload data" in bytes. If 0-125, that is the payload length. If 126, the following 2 bytes interpreted as a 16-bit unsigned integer are the payload length. If 127, the following 8 bytes interpreted as a 64-bit unsigned integer (the most significant bit MUST be 0) are the payload length. The payload length is the length of the "Extension data" plus the length of the "Application data". For this implementation, you don't have to worry about the "Extension data", so we assume its length is zero, in which case the payload length is just the length of the "Application data".
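Decoding the payload length can be sketched like this, assuming the extended-length bytes (if any) have already been read from the frame into an array:

```java
public class PayloadLength {
    // Decode the payload length from the 7-bit length field and, when
    // needed, from the 2- or 8-byte extended length that follows it.
    static long payloadLength(int lengthField, byte[] extended) {
        if (lengthField <= 125) {
            return lengthField;                    // fits in 7 bits
        } else if (lengthField == 126) {
            // next 2 bytes are a 16-bit unsigned integer
            return ((extended[0] & 0xFFL) << 8) | (extended[1] & 0xFFL);
        } else {                                   // lengthField == 127
            // next 8 bytes are a 64-bit unsigned integer
            long length = 0;
            for (int i = 0; i < 8; i++) {
                length = (length << 8) | (extended[i] & 0xFFL);
            }
            if (length < 0) {                      // MSB MUST be 0
                throw new IllegalArgumentException(
                    "most significant bit must be 0");
            }
            return length;
        }
    }
}
```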
Masking-key: all frames sent from the client to the server are masked by a 32-bit value that is contained within the frame. The masking key is a 32-bit value chosen at random by the client. When preparing a masked frame, the client MUST pick a fresh masking key from the set of allowed 32-bit values. To convert masked data into unmasked data, or vice versa, the following algorithm is applied: octet i of the transformed data ("transformed-octet-i") is the XOR of octet i of the original data ("original-octet-i") with the octet at index j = i modulo 4 of the masking key ("masking-key-octet-j").
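Since XOR is its own inverse, a single function can both mask and unmask:

```java
public class Unmask {
    // XOR each payload octet with the masking-key octet at index i mod 4.
    // The masking key is the 4-byte (32-bit) value taken from the frame.
    static byte[] applyMask(byte[] payload, byte[] maskingKey) {
        byte[] out = new byte[payload.length];
        for (int i = 0; i < payload.length; i++) {
            out[i] = (byte) (payload[i] ^ maskingKey[i % 4]);
        }
        return out;
    }
}
```

Applying the transform twice with the same key returns the original data, which is a handy property for testing.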
Payload data: The “Payload data” is defined as “Extension data” concatenated with “Application data”. It is important to note that the representation of this data is binary, not ASCII characters.
Magic Shape is the final project for my Human Computer Interaction class at Baylor. It is a solid geometry edutainment application for children using tangible augmented reality.
This article describes the development of the Magic Shape prototype. The purpose of Magic Shape is to teach the following concepts:
The recognition of basic 3D shapes such as cube, cylinder, sphere, cone and pyramid.
The number of faces of basic 3D shapes.
The number of edges of basic 3D shapes.
Volume and surface area of basic 3D shapes.
There are three modes in Magic Shape: practice mode, testing mode and game mode.
The interaction interface is a combination of AR and TUI. Using AR, the system can render different types of 3D shapes, such as a cube, cylinder or sphere; all children need is a piece of paper with a marker printed on it. Tangible cubes are also used in the system, because we want children to learn about the concepts of volume and surface area. By using these physical cubes, children can also build their own 3D shapes.
In the initial prototype, there are 8 tangible objects being tracked as shown in the following figure.
A cube: view a 3D cube
A cylinder: view a 3D cylinder
Scissors: cut the 3D shape into 2D diagram
A wireframe box: display the wireframe
A green pen: change the shape color to green
A purple pen: change the shape color to purple
A zoom-in tool: make the 3D shape bigger
A zoom-out tool: make the 3D shape smaller
Each of these objects has a fiducial marker attached. The markers are default markers from the ALVAR tracking library and are rated highly for tracking performance.
Tangible User Interface
Magic Shape has a set of physical cubes as shown in the following figure.
With these physical cubes, children can build and create any shape they want, and these new shapes are recognized by the system in real time. Magic Shape can also calculate the volume and the surface area of these new shapes and display them to the children. By default, each physical cube is considered one unit cubed, so the volume simply equals the number of cubes. Calculating the surface area of the shapes, on the other hand, is much harder. In our prototype, we use augmented reality and computer vision technology to support the tangible user interface, so the system only knows whether a cube is present or not; it has no idea whether the shape is connected. To keep things simple, the system assumes that each shape is connected and that any two cubes are joined along at most one full face. In this case, the system computes the surface area using the following formula:
Surface Area = Number of Cubes × 6 − 2 × Number of Connected Sides
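Under those same assumptions (unit cubes, connections only across whole faces), the computation can be sketched from a set of occupied grid cells; the class and helper names here are my own:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CubeShape {
    // Count face-adjacent pairs among the occupied unit cells, then
    // apply: surface = cubes * 6 - 2 * connected sides.
    static int surfaceArea(Set<List<Integer>> cells) {
        int[][] neighbors = {
            {1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}
        };
        int connectedSides = 0;
        for (List<Integer> c : cells) {
            for (int[] d : neighbors) {
                List<Integer> n = Arrays.asList(
                    c.get(0) + d[0], c.get(1) + d[1], c.get(2) + d[2]);
                if (cells.contains(n)) connectedSides++; // counts each pair twice
            }
        }
        connectedSides /= 2;
        return cells.size() * 6 - 2 * connectedSides;
    }

    // Each physical cube is one unit cubed, so volume = number of cubes.
    static int volume(Set<List<Integer>> cells) {
        return cells.size();
    }

    public static void main(String[] args) {
        Set<List<Integer>> bar = new HashSet<>();
        bar.add(Arrays.asList(0, 0, 0));
        bar.add(Arrays.asList(1, 0, 0));
        bar.add(Arrays.asList(2, 0, 0));
        // 3 cubes in a row: 3*6 - 2*2 = 14
        System.out.println(surfaceArea(bar)); // 14
        System.out.println(volume(bar));      // 3
    }
}
```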
In practice mode, the system can render five types of shapes: cube, cylinder, cone, pyramid and sphere. Each of these shapes can be manipulated using the following operations:
Rotate: children can rotate the 3D shape by rotating the marker.
Move: children can move the 3D shape by moving the marker.
Scale: children can make the shape bigger by showing the zoom-in marker and make the shape smaller by showing the zoom-out marker.
Change color: children can change the color of the shape by showing the different color pen marker.
Cut: children can cut the 3D shape into a 2D diagram by showing the scissors marker. This allows children to count the number of faces.
Wireframe: children can see the wireframe of the 3D shape by showing the wireframe marker. This allows children to count the number of edges.
Children can also explore the volume and surface area of different shapes they build in this mode. Magic Shape will give real-time feedback to whatever shapes children build in the system. In this way, children can observe the impact of changing a shape on its volume and surface and it also reinforces the relationships of the number of blocks to volume and the number of visible sides to surface area.
In testing mode, we use augmented paper to present questions, and children are asked to answer these questions accordingly. Basically, this is a Q/A interface, and Magic Shape tells children whether they answered correctly or not. For the initial prototype, there are two kinds of questions in this mode. In one scenario, children are asked to build a shape that matches a given volume or surface area. In the other scenario, the system lets children build a new shape using the tangible cubes and then asks them to calculate the volume and surface area of the new shape. An example of the first scenario is presented in the following figure.
In game mode, children can play an augmented reality marble game. But before that, they have to build a maze and obstacles using the shapes that they have learned in Magic Shape. They can freely build a combination of shapes to form a maze and obstacles and then play the AR marble game in this maze they build. In the game, children must guide a virtual ball through the maze and obstacles that they build on the board by tilting and translating the board.
Magic Shape is implemented using GoblinXNA, which is a platform for research on 3D user interfaces, including augmented reality and virtual reality. Marker tracking is done using the ALVAR tracking library developed by the VTT Technical Research Centre. The prototype runs on a ThinkPad laptop with the Windows 7 operating system, equipped with a Logitech Pro 9000 webcam. In all modes, Magic Shape runs at approximately 60 fps.
Inter-patient variability is a major contributing factor to adverse outcomes in many areas of medical practice, especially in conscious sedation procedures. For example, two patients of similar height, weight, and age may react very differently to the same sedative, resulting in over-sedation or under-sedation. In many of these cases, patients who are assessed to be at minimal risk are actually at high risk. Problems such as unexpected respiratory depression then cause an adverse outcome, such as neurologic damage or death. Current training programs mainly train students through lectures and real-world experience, which logistically cannot bring students to competency in an acceptable amount of time. To address this problem, this proposal aims to develop an immersive mixed reality (MR) based training simulator for educating students about variability in conscious sedation procedures.
This project is an essential part of the conscious sedation simulator. The After Action Review module allows the research team to review users' past training experiences by providing playback of recorded training sessions.
Upper GI Wizard of Oz Interface
The Wizard of Oz interface will be operated by a member of the research team as a control mechanism, accepting communication from the PKPD Variable Modeler, the Human Patient Simulator, and any other components built or included in the future. At any time, a researcher will be able to send operations to the virtual patient. These actions can also be delayed based upon the Upper GI protocols and sedation reactions. In addition, the WoZ application will control sound, character animations, and drug administration, and will output the data from the PKPD modeler representing vital statistics.
Interaction with the PKPD Modeler is accomplished using TCP/IP socket connectivity. Data received from the PKPD modeler is parsed to form the appropriate RenMessage to send to the REN Application.
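The RenMessage layout and the PKPD wire format aren't spelled out here, so the following is only a hypothetical sketch of the parse-and-forward step; the field names and the assumed `name=value` line format are invented:

```java
public class PkpdParser {
    // Hypothetical message holder; the real RenMessage API is not shown here.
    static class RenMessage {
        final String type;
        final double value;
        RenMessage(String type, double value) {
            this.type = type;
            this.value = value;
        }
    }

    // Assume the PKPD modeler sends lines like "HR=72.0"; parse one
    // line into a message ready to forward to the REN application.
    static RenMessage parseLine(String line) {
        String[] parts = line.trim().split("=", 2);
        if (parts.length != 2) {
            throw new IllegalArgumentException("Unrecognized PKPD line: " + line);
        }
        return new RenMessage(parts[0], Double.parseDouble(parts[1]));
    }
}
```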
PKPD Variable Modeler
The PKPD model (PKPD) is an executable program for Windows OS that controls drug and patient variables. Pharmacokinetics (PK) describes how the drug distributes through and is eliminated by the body. It is often represented by a multi-compartment numerical model. Pharmacodynamics (PD) is what the drug does to the body. The PK model will track drug concentrations in several interconnected compartments that are roughly analogous to different body tissues. The PD model will read the drug concentration in one of the compartments and scale a physiologic change such as heart rate, breathing, airway obstruction, oxygen saturation, exhaled carbon dioxide or blood pressure.
Upper GI Ren Application
The Upper GI REN Application accepts RenMessages from the WoZ application. The type of RenMessage dictates the animations, sound, and emotional state of our virtual patient.
Event Centric Logging
The After Action Review (AAR) module should be event-centric. A global data structure is needed to hold all of the events during an exam in chronological order. AAR should have a GUI showing each event and its index number so that AAR users can see the events in a time-centric view. PKPD data is needed for all events. For example, the event name for moving the leg at maximum distance is "move left leg max"; it plays the animation 'leg_max' and responds with the text message 'left leg max'. There is no sound or emotion tied to this event yet. We also need to record events so that we can replay the proper animations, and students' input should be captured too so that we can replay that as well. Here is the general format for the data structure:
Event name: Leg Movement Max
Associated data: PKPD string, animations, speech, etc.
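That format could be sketched as a simple record type (in Java here; the field names are my own reading of the format above):

```java
public class AarEvent {
    // One logged event, held in chronological order by the AAR module.
    final int index;          // position in the chronological event list
    final long timestampMs;   // when the event occurred
    final String name;        // e.g. "move left leg max"
    final String animation;   // e.g. "leg_max", replayed from the log
    final String response;    // text response, e.g. "left leg max"
    final String pkpd;        // PKPD data snapshot for this event

    AarEvent(int index, long timestampMs, String name,
             String animation, String response, String pkpd) {
        this.index = index;
        this.timestampMs = timestampMs;
        this.name = name;
        this.animation = animation;
        this.response = response;
        this.pkpd = pkpd;
    }
}
```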
Given the log file information, the system can parse it and extract the relevant data. In order to replay an animation, the system has to get the timestamp of the event and the name of the response animation. There is no way to record all the animations, so the only way to replay is to trigger animations based on the log. The corresponding PKPD info should also update in the AAR interface during replay.
AAR can replay the animations like a media player: users should be able to move back and forth through the events as easily as in a video player.